serf-0.6.4/.gitignore

# Platform
.DS_Store

# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so

# Folders
bin
_obj
_test
pkg/

# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out

*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*

_testmain.go

*.exe
*.test

# Website
website/build/
.vagrant

serf-0.6.4/.travis.yml

language: go

go:
  - tip

install: make deps

script:
  - make test

notifications:
  flowdock:
    secure: fZrcf9rlh2IrQrlch1sHkn3YI7SKvjGnAl/zyV5D6NROe1Bbr6d3QRMuCXWWdhJHzjKmXk5rIzbqJhUc0PNF7YjxGNKSzqWMQ56KcvN1k8DzlqxpqkcA3Jbs6fXCWo2fssRtZ7hj/wOP1f5n6cc7kzHDt9dgaYJ6nO2fqNPJiTc=

serf-0.6.4/CHANGELOG.md

## 0.6.4 (February 12, 2015)

IMPROVEMENTS:

* Added merge delegate to Serf library to support application specific logic in cluster merging.
* `SERF_RPC_AUTH` environment variable can be used in place of CLI flags.
* Display if encryption is enabled in Serf stats
* Improved `join` behavior when using DNS resolution

BUG FIXES:

* Fixed snapshot file compaction on Windows
* Fixed device binding on Windows
* Fixed bug with empty keyring
* Fixed parsing of ports in some cases
* Fixed stability issues under high churn

MISC:

* Increased user event size limit to 512 bytes (previously 256)

## 0.6.3 (July 10, 2014)

IMPROVEMENTS:

* Added `statsite_addr` configuration to stream to statsite

BUG FIXES:

* Fixed issue with mDNS flooding when using IPv4 and IPv6
* Fixed issue with reloading event handlers

MISC:

* Improved failure detection reliability under load
* Reduced fsync() use in snapshot file
* Improved snapshot file performance
* Additional logging to help debug flapping

## 0.6.2 (June 16, 2014)

IMPROVEMENTS:

* Added `syslog_facility` configuration to set facility

BUG FIXES:

* Fixed memory leak in in-memory stats system
* Fixed issue that would cause syslog to deadlock

MISC:

* Fixed missing prefixes on some log messages
* Docs fixes

## 0.6.1 (May 29, 2014)

BUG FIXES:

* On Windows, a "failed to decode request header" error will no longer be shown on every RPC request.
* Avoid holding a lock which can cause monitor/stream commands to fail when an event handler is blocking
* Fixed conflict response decoding errors

IMPROVEMENTS:

* Improved agent CLI usage documentation
* Warn if an event handler is slow, potentially blocking other events

## 0.6.0 (May 8, 2014)

FEATURES:

* Support for key rotation when using encryption. This adds a new `serf keys` command, and a `-keyring-file` configuration. Thanks to @ryanuber.
* New `-tags-file` can be specified to persist changes to tags made via the RPC interface. Thanks to @ryanuber.
* New `serf info` command to provide operator debugging information, and to get info about the local node.
* Added `-retry-join` flag to agent which enables retrying the join until success or `-retry-max` attempts have been made.
IMPROVEMENTS:

* New `-rejoin` flag can be used along with a snapshot file to automatically rejoin a cluster.
* Agent uses circular buffer to invoke handlers, guards against unbounded output lengths.
* Added support for logging to syslog
* The SERF_RPC_ADDR environment variable can be used instead of the `-rpc-addr` flag. Thanks to @lalyos [GH-209].
* `serf query` can now output the results in a JSON format.
* Unknown configuration directives generate an error [GH-186]. Thanks to @vincentbernat.

BUG FIXES:

* Fixed environment variables with invalid characters [GH-200]. Thanks to @arschles.
* Fixed issue with tag changes with hard restart before failure detection.
* Fixed issue with reconnect when using dynamic ports.

MISC:

* Improved logging of various error messages
* Improved Debian packaging. Thanks to @vincentbernat.

## 0.5.0 (March 12, 2014)

FEATURES:

* New `query` command provides a request/response mechanism to do realtime queries across the cluster. [GH-139]
* Automatic conflict resolution. Serf will detect name conflicts, and use an internal query to determine which node is in the minority and perform a shutdown. [GH-167] [GH-119]
* New `reachability` command can be used to help diagnose network and configuration issues.
* Added `member-reap` event to get notified when Serf removes a failed or left node from the cluster. The reap interval is controlled by `reconnect_timeout` and `tombstone_timeout` respectively. [GH-172]

IMPROVEMENTS:

* New Recipes section on the site to share Serf tips. Thanks to @ryanuber. [GH-177]
* `members` command has new `-name` filter flag. Thanks to @ryanuber [GH-164]
* New RPC command "members-filtered" to move filtering logic to the agent. Thanks to @ryanuber. [GH-149]
* `reconnect_interval` and `reconnect_timeout` can be provided to configure agent behavior for attempting to reconnect to failed nodes. [GH-155]
* `tombstone_interval` can be provided to configure the reap time for nodes that have gracefully left.
  [GH-172]
* Agent can be provided `rpc_auth` config to require that RPC is authenticated. All commands can take a `-rpc-auth` flag now. [GH-148]

BUG FIXES:

* Fixed config folder in Upstart script. Thanks to @llchen223. [GH-174]
* Event handlers are correctly invoked when BusyBox is the shell. [GH-156]
* Event handlers were not being invoked with the correct SERF_TAG_* values if tags were changed using the `tags` command. [GH-169]

MISC:

* Support for protocol version 1 (Serf 0.2) has been removed. Serf 0.5 cannot join a cluster that has members running version 0.2.

## 0.4.5 (February 25, 2014)

FEATURES:

* New `tags` command is available to dynamically update tags without reloading the agent. Thanks to @ryanuber. [GH-126]

IMPROVEMENTS:

* Upstart recipe logs output, thanks to @breerly [GH-128]
* `members` can filter on any tag, thanks to @hmrm [GH-124]
* Added Vagrant demo to make a simple cluster
* `members` now columnizes the output, thanks to @ryanuber [GH-138]
* Agent passes its own environment variables through, thanks to @mcroydon [GH-142]
* `-iface` flag can be used to bind to interfaces [GH-145]

BUG FIXES:

* `-config-dir` would cause protocol to be set to 0 if there are no configuration files in the directory [GH-129]
* Event handlers can filter on 'member-update'
* User event handlers append a newline; this was previously being omitted

## 0.4.1 (February 3, 2014)

IMPROVEMENTS:

* mDNS service uses the advertise address instead of bind address

## 0.4.0 (January 31, 2014)

FEATURES:

* Static `role` has been replaced with dynamic tags. Each agent can have multiple key/value tags associated using `-tag`. Tags can be updated using a SIGHUP and are advertised to the cluster, causing the `member-update` event to be triggered. [GH-111] [GH-98]
* Serf can automatically discover peers using mDNS when provided the `-discover` flag. In network environments supporting multicast, no explicit join is needed to find peers.
  [GH-53]
* Serf collects telemetry information and simple runtime profiling. Stats can be dumped to stderr by sending a `USR1` signal to Serf. Windows users must use the `BREAK` signal instead. [GH-103]
* `advertise` flag can be used to set an advertise address different from the bind address. Used for NAT traversal. Thanks to @benagricola [GH-93]
* `members` command now takes `-format` flag to specify either text or JSON output. Fixed by @ryanuber [GH-97]

IMPROVEMENTS:

* User payload always appends a newline when invoking a shell script
* Severity of "Potential blocking operation" reduced to debug to prevent spurious messages on slow or busy machines.

BUG FIXES:

* If an agent is restarted with the same bind address but new name, it will not respond to the old name, causing the old name to enter the `failed` state, instead of having duplicate entries in the `alive` state.
* `leave_on_interrupt` set to false when not specified, if any config file is provided. This flag is deprecated in favor of `skip_leave_on_interrupt`. [GH-94]

MISC:

* `-role` configuration has been deprecated in favor of `-tag role=foo`. The flag is still supported but will generate warnings.
* Support for protocol version 0 (Serf 0.1) has been removed. Serf 0.4 cannot join a cluster that has members running version 0.1.

## 0.3.0 (December 5, 2013)

FEATURES:

* Dynamic port support; a cluster-wide consistent config is no longer necessary
* Snapshots to automatically rejoin cluster after failure and prevent replays [GH-84] [GH-71]
* Added `profile` config to agent, to support WAN, LAN, and Local modes
* MsgPack over TCP RPC protocol which can be used to control Serf, send events, and receive events with low latency.
* New `leave` CLI command and RPC endpoint to control graceful leaves
* Signal handling is controllable; graceful leave behavior on SIGINT/SIGTERM can be specified
* SIGHUP can be used to reload configuration

IMPROVEMENTS:

* Event handler provides Lamport time of user events via SERF_USER_LTIME [GH-68]
* Memberlist encryption overhead has been reduced
* Filter output of `members` using regular expressions on role and status
* `replay_on_join` parameter to control replay with `start_join`
* `monitor` works even if the client is behind a NAT
* Serf generates warning if binding to public IP without encryption

BUG FIXES:

* Prevent unbounded transmit queues [GH-78]
* IPv6 addresses can be bound to [GH-72]
* Serf join won't hang on a slow/dead node [GH-70]
* Serf leave won't block shutdown [GH-1]

## 0.2.1 (November 6, 2013)

BUG FIXES:

* Member role and address not updated on re-join [GH-58]

## 0.2.0 (November 1, 2013)

FEATURES:

* Protocol versioning features so that upgrades can be done safely. See the website on upgrading Serf for more info.
* Can now configure Serf with files or directories of files by specifying the `-config-file` and/or `-config-dir` flags to the agent.
* New command `serf force-leave` can be used to force a "failed" node to the "left" state.
* Serf now supports message encryption and verification so that it can be used on untrusted networks [GH-25]
* The `-join` flag on `serf agent` can be used to join a cluster when starting an agent. [GH-42]

IMPROVEMENTS:

* Random staggering of periodic routines to avoid cluster-wide synchronization
* Push/Pull timer automatically slows down as cluster grows to avoid congestion
* Messages are compressed to reduce bandwidth utilization
* `serf members` now provides node roles in output
* Joining a cluster will no longer replay all the old events by default, but it can with the `-replay` flag.
* User events are coalesced by default, meaning duplicate events (by name) within a short period of time are merged.
  [GH-8]

BUG FIXES:

* Event handlers work on Windows now by executing commands through `cmd /C` [GH-37]
* Nodes that previously left and rejoined won't get stuck in 'leaving' state. [GH-18]
* Fixed alignment issues on i386 for atomic operations [GH-20]
* "trace" log level works [GH-31]

## 0.1.1 (October 23, 2013)

BUG FIXES:

* Default node name is output when "serf agent" is called with no args.
* Remove node from reap list after join so a fast re-join doesn't lose the member.

## 0.1.0 (October 23, 2013)

* Initial release

serf-0.6.4/LICENSE

Mozilla Public License, version 2.0 1. Definitions 1.1. “Contributor” means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software. 1.2. “Contributor Version” means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor’s Contribution. 1.3. “Contribution” means Covered Software of a particular Contributor. 1.4. “Covered Software” means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof. 1.5. “Incompatible With Secondary Licenses” means a. that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or b. that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License. 1.6. “Executable Form” means any form of the work other than Source Code Form. 1.7. “Larger Work” means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software. 1.8. “License” means this document. 1.9.
“Licensable” means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License. 1.10. “Modifications” means any of the following: a. any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or b. any new file in Source Code Form that contains any Covered Software. 1.11. “Patent Claims” of a Contributor means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version. 1.12. “Secondary License” means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses. 1.13. “Source Code Form” means the form of the work preferred for making modifications. 1.14. “You” (or “Your”) means an individual or a legal entity exercising rights under this License. For legal entities, “You” includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, “control” means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity. 2. License Grants and Conditions 2.1. Grants Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license: a. 
under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and b. under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version. 2.2. Effective Date The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution. 2.3. Limitations on Grant Scope The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor: a. for any code that a Contributor has removed from Covered Software; or b. for infringements caused by: (i) Your and any other third party’s modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or c. under Patent Claims infringed by Covered Software in the absence of its Contributions. This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4). 2.4. Subsequent Licenses No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3). 2.5. 
Representation Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License. 2.6. Fair Use This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents. 2.7. Conditions Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1. 3. Responsibilities 3.1. Distribution of Source Form All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients’ rights in the Source Code Form. 3.2. Distribution of Executable Form If You distribute Covered Software in Executable Form then: a. such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and b. You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients’ rights in the Source Code Form under this License. 3.3. Distribution of a Larger Work You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. 
If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s). 3.4. Notices You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies. 3.5. Application of Additional Terms You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction. 4. Inability to Comply Due to Statute or Regulation If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. 
Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it. 5. Termination 5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice. 5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate. 5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination. 6. 
Disclaimer of Warranty Covered Software is provided under this License on an “as is” basis, without warranty of any kind, either expressed, implied, or statutory, including, without limitation, warranties that the Covered Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The entire risk as to the quality and performance of the Covered Software is with You. Should any Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any necessary servicing, repair, or correction. This disclaimer of warranty constitutes an essential part of this License. No use of any Covered Software is authorized under this License except under this disclaimer. 7. Limitation of Liability Under no circumstances and under no legal theory, whether tort (including negligence), contract, or otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be liable to You for any direct, indirect, special, incidental, or consequential damages of any character including, without limitation, damages for lost profits, loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses, even if such party shall have been informed of the possibility of such damages. This limitation of liability shall not apply to liability for death or personal injury resulting from such party’s negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to You. 8. Litigation Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. 
Nothing in this Section shall prevent a party’s ability to bring cross-claims or counter-claims. 9. Miscellaneous This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor. 10. Versions of the License 10.1. New Versions Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number. 10.2. Effect of New Versions You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward. 10.3. Modified Versions If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License). 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached. Exhibit A - Source Code Form License Notice This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 
If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. You may add additional accurate notices of copyright ownership. Exhibit B - “Incompatible With Secondary Licenses” Notice This Source Code Form is “Incompatible With Secondary Licenses”, as defined by the Mozilla Public License, v. 2.0.

serf-0.6.4/Makefile

DEPS = $(shell go list -f '{{range .TestImports}}{{.}} {{end}}' ./...)

all: deps
	@mkdir -p bin/
	@bash --norc -i ./scripts/build.sh

cov:
	gocov test ./... | gocov-html > /tmp/coverage.html
	open /tmp/coverage.html

deps:
	go get -d -v ./...
	echo $(DEPS) | xargs -n1 go get -d

test: deps subnet
	go list ./... | xargs -n1 go test

integ: subnet
	go list ./... | INTEG_TESTS=yes xargs -n1 go test

subnet:
	./scripts/setup_test_subnet.sh

web:
	./scripts/website_run.sh

web-push:
	./scripts/website_push.sh

.PHONY: all cov deps integ subnet test web web-push

serf-0.6.4/README.md

# Serf

* Website: http://www.serfdom.io
* IRC: `#serfdom` on Freenode
* Mailing list: [Google Groups](https://groups.google.com/group/serfdom/)

Serf is a decentralized solution for service discovery and orchestration that is lightweight, highly available, and fault tolerant.

Serf runs on Linux, Mac OS X, and Windows. An efficient and lightweight gossip protocol is used to communicate with other nodes. Serf can detect node failures and notify the rest of the cluster. An event system is built on top of Serf, letting you use Serf's gossip protocol to propagate events such as deploys, configuration changes, etc. Serf is completely masterless with no single point of failure.
Here are some example use cases of Serf, though there are many others:

* Discovering web servers and automatically adding them to a load balancer
* Organizing many memcached or redis nodes into a cluster, perhaps with something like [twemproxy](https://github.com/twitter/twemproxy) or maybe just configuring an application with the address of all the nodes
* Triggering web deploys using the event system built on top of Serf
* Propagating changes to configuration to relevant nodes
* Updating DNS records to reflect cluster changes as they occur
* Much, much more

## Quick Start

First, [download a pre-built Serf binary](http://www.serfdom.io/downloads.html) for your operating system or [compile Serf yourself](#developing-serf).

Next, let's start a couple Serf agents. Agents run until they're told to quit and handle the communication and maintenance tasks of Serf. In a real Serf setup, each node in your system will run one or more Serf agents (it can run multiple agents if you're running multiple cluster types, e.g. web servers vs. memcached servers).

Start each Serf agent in a separate terminal session so that we can see the output of each. Start the first agent:

```
$ serf agent -node=foo -bind=127.0.0.1:5000 -rpc-addr=127.0.0.1:7373
...
```

Start the second agent in another terminal session (while the first is still running):

```
$ serf agent -node=bar -bind=127.0.0.1:5001 -rpc-addr=127.0.0.1:7374
...
```

At this point two Serf agents are running independently but are still unaware of each other. Let's now tell the first agent to join an existing cluster (the second agent). To join an existing cluster, a Serf agent only needs the address of at least one existing member. After the join, Serf gossips and the remainder of the cluster becomes aware of the new member. Run the following command in a third terminal session:

```
$ serf join 127.0.0.1:5001
...
```

If you're watching your terminals, you should see both Serf agents become aware of the join.
You can prove it by running `serf members` to see the members of the Serf cluster:

```
$ serf members
foo    127.0.0.1:5000    alive
bar    127.0.0.1:5001    alive
...
```

At this point, you can ctrl-C or force kill either Serf agent, and they'll update their membership lists appropriately. If you ctrl-C a Serf agent, it will gracefully leave by notifying the cluster of its intent to leave. If you force kill an agent, it will eventually (usually within seconds) be detected by another member of the cluster, which will notify the cluster of the node failure.

## Documentation

Full, comprehensive documentation is viewable on the Serf website: http://www.serfdom.io/docs

## Developing Serf

If you wish to work on Serf itself, you'll first need [Go](http://golang.org) installed (version 1.2+ is _required_). Make sure you have Go properly installed, including setting up your [GOPATH](http://golang.org/doc/code.html#GOPATH).

Next, clone this repository into `$GOPATH/src/github.com/hashicorp/serf` and then just type `make`. In a few moments, you'll have a working `serf` executable:

```
$ make
...
$ bin/serf
...
```

*note: `make` will also place a copy of the binary in the first part of your $GOPATH*

You can run tests by typing `make test`. If you make any changes to the code, run `make format` in order to automatically format the code according to Go standards.

serf-0.6.4/client/README.md

# Serf Client

This repo provides the `client` package, which is used to interact with a Serf agent using the msgpack RPC system it supports. This is the official reference implementation, and is used inside the Serf CLI to support the various commands.

Full documentation can be found on [godoc here](http://godoc.org/github.com/hashicorp/serf/client).
serf-0.6.4/client/const.go

package client

import (
	"github.com/hashicorp/serf/serf"
	"net"
	"time"
)

const (
	maxIPCVersion = 1
)

const (
	handshakeCommand       = "handshake"
	eventCommand           = "event"
	forceLeaveCommand      = "force-leave"
	joinCommand            = "join"
	membersCommand         = "members"
	membersFilteredCommand = "members-filtered"
	streamCommand          = "stream"
	stopCommand            = "stop"
	monitorCommand         = "monitor"
	leaveCommand           = "leave"
	installKeyCommand      = "install-key"
	useKeyCommand          = "use-key"
	removeKeyCommand       = "remove-key"
	listKeysCommand        = "list-keys"
	tagsCommand            = "tags"
	queryCommand           = "query"
	respondCommand         = "respond"
	authCommand            = "auth"
	statsCommand           = "stats"
)

const (
	unsupportedCommand    = "Unsupported command"
	unsupportedIPCVersion = "Unsupported IPC version"
	duplicateHandshake    = "Handshake already performed"
	handshakeRequired     = "Handshake required"
	monitorExists         = "Monitor already exists"
	invalidFilter         = "Invalid event filter"
	streamExists          = "Stream with given sequence exists"
	invalidQueryID        = "No pending queries matching ID"
	authRequired          = "Authentication required"
	invalidAuthToken      = "Invalid authentication token"
)

const (
	queryRecordAck      = "ack"
	queryRecordResponse = "response"
	queryRecordDone     = "done"
)

// Request header is sent before each request
type requestHeader struct {
	Command string
	Seq     uint64
}

// Response header is sent before each response
type responseHeader struct {
	Seq   uint64
	Error string
}

type handshakeRequest struct {
	Version int32
}

type authRequest struct {
	AuthKey string
}

type eventRequest struct {
	Name     string
	Payload  []byte
	Coalesce bool
}

type forceLeaveRequest struct {
	Node string
}

type joinRequest struct {
	Existing []string
	Replay   bool
}

type joinResponse struct {
	Num int32
}

type membersFilteredRequest struct {
	Tags   map[string]string
	Status string
	Name   string
}

type membersResponse struct {
	Members []Member
}

type keyRequest struct {
	Key string
}

type keyResponse struct {
Messages map[string]string Keys map[string]int NumNodes int NumErr int NumResp int } type monitorRequest struct { LogLevel string } type streamRequest struct { Type string } type stopRequest struct { Stop uint64 } type tagsRequest struct { Tags map[string]string DeleteTags []string } type queryRequest struct { FilterNodes []string FilterTags map[string]string RequestAck bool Timeout time.Duration Name string Payload []byte } type respondRequest struct { ID uint64 Payload []byte } type queryRecord struct { Type string From string Payload []byte } // NodeResponse is used to return the response of a query type NodeResponse struct { From string Payload []byte } type logRecord struct { Log string } type userEventRecord struct { Event string LTime serf.LamportTime Name string Payload []byte Coalesce bool } // Member is used to represent a single member of the // Serf cluster type Member struct { Name string // Node name Addr net.IP // Address of the Serf node Port uint16 // Gossip port used by Serf Tags map[string]string Status string ProtocolMin uint8 // Minimum supported Memberlist protocol ProtocolMax uint8 // Maximum supported Memberlist protocol ProtocolCur uint8 // Currently set Memberlist protocol DelegateMin uint8 // Minimum supported Serf protocol DelegateMax uint8 // Maximum supported Serf protocol DelegateCur uint8 // Currently set Serf protocol } type memberEventRecord struct { Event string Members []Member } serf-0.6.4/client/rpc_client.go000066400000000000000000000441011246721563000164150ustar00rootroot00000000000000package client import ( "bufio" "fmt" "github.com/hashicorp/go-msgpack/codec" "github.com/hashicorp/logutils" "log" "net" "sync" "sync/atomic" "time" ) const ( // This is the default IO timeout for the client DefaultTimeout = 10 * time.Second ) var ( clientClosed = fmt.Errorf("client closed") ) type seqCallback struct { handler func(*responseHeader) } func (sc *seqCallback) Handle(resp *responseHeader) { sc.handler(resp) } func (sc *seqCallback) 
Cleanup() {} // seqHandler interface is used to handle responses type seqHandler interface { Handle(*responseHeader) Cleanup() } // Config is provided to ClientFromConfig to make // a new RPCClient from the given configuration type Config struct { // Addr must be the RPC address to contact Addr string // If provided, the client will perform key based auth AuthKey string // If provided, overrides the DefaultTimeout used for // IO deadlines Timeout time.Duration } // RPCClient is used to make requests to the Agent using an RPC mechanism. // Additionally, the client manages event streams and monitors, enabling a client // to easily receive event notifications instead of using the fork/exec mechanism. type RPCClient struct { seq uint64 timeout time.Duration conn *net.TCPConn reader *bufio.Reader writer *bufio.Writer dec *codec.Decoder enc *codec.Encoder writeLock sync.Mutex dispatch map[uint64]seqHandler dispatchLock sync.Mutex shutdown bool shutdownCh chan struct{} shutdownLock sync.Mutex } // send is used to send an object using the MsgPack encoding. send // is serialized to prevent write overlaps, while properly buffering. func (c *RPCClient) send(header *requestHeader, obj interface{}) error { c.writeLock.Lock() defer c.writeLock.Unlock() if c.shutdown { return clientClosed } // Setup an IO deadline, this way we won't wait indefinitely // if the client has hung. if err := c.conn.SetWriteDeadline(time.Now().Add(c.timeout)); err != nil { return err } if err := c.enc.Encode(header); err != nil { return err } if obj != nil { if err := c.enc.Encode(obj); err != nil { return err } } if err := c.writer.Flush(); err != nil { return err } return nil } // NewRPCClient is used to create a new RPC client given the // RPC address of the Serf agent. This will return a client, // or an error if the connection could not be established. // This will use the DefaultTimeout for the client. 
func NewRPCClient(addr string) (*RPCClient, error) { conf := Config{Addr: addr} return ClientFromConfig(&conf) } // ClientFromConfig is used to create a new RPC client given the // configuration object. This will return a client, or an error if // the connection could not be established. func ClientFromConfig(c *Config) (*RPCClient, error) { // Setup the defaults if c.Timeout == 0 { c.Timeout = DefaultTimeout } // Try to dial to serf conn, err := net.DialTimeout("tcp", c.Addr, c.Timeout) if err != nil { return nil, err } // Create the client client := &RPCClient{ seq: 0, timeout: c.Timeout, conn: conn.(*net.TCPConn), reader: bufio.NewReader(conn), writer: bufio.NewWriter(conn), dispatch: make(map[uint64]seqHandler), shutdownCh: make(chan struct{}), } client.dec = codec.NewDecoder(client.reader, &codec.MsgpackHandle{RawToString: true, WriteExt: true}) client.enc = codec.NewEncoder(client.writer, &codec.MsgpackHandle{RawToString: true, WriteExt: true}) go client.listen() // Do the initial handshake if err := client.handshake(); err != nil { client.Close() return nil, err } // Do the initial authentication if needed if c.AuthKey != "" { if err := client.auth(c.AuthKey); err != nil { client.Close() return nil, err } } return client, err } // StreamHandle is an opaque handle passed to stop to stop streaming type StreamHandle uint64 func (c *RPCClient) IsClosed() bool { return c.shutdown } // Close is used to free any resources associated with the client func (c *RPCClient) Close() error { c.shutdownLock.Lock() defer c.shutdownLock.Unlock() if !c.shutdown { c.shutdown = true close(c.shutdownCh) c.deregisterAll() return c.conn.Close() } return nil } // ForceLeave is used to ask the agent to issue a leave command for // a given node func (c *RPCClient) ForceLeave(node string) error { header := requestHeader{ Command: forceLeaveCommand, Seq: c.getSeq(), } req := forceLeaveRequest{ Node: node, } return c.genericRPC(&header, &req, nil) } // Join is used to instruct the agent 
to attempt a join func (c *RPCClient) Join(addrs []string, replay bool) (int, error) { header := requestHeader{ Command: joinCommand, Seq: c.getSeq(), } req := joinRequest{ Existing: addrs, Replay: replay, } var resp joinResponse err := c.genericRPC(&header, &req, &resp) return int(resp.Num), err } // Members is used to fetch a list of known members func (c *RPCClient) Members() ([]Member, error) { header := requestHeader{ Command: membersCommand, Seq: c.getSeq(), } var resp membersResponse err := c.genericRPC(&header, nil, &resp) return resp.Members, err } // MembersFiltered returns a subset of members func (c *RPCClient) MembersFiltered(tags map[string]string, status string, name string) ([]Member, error) { header := requestHeader{ Command: membersFilteredCommand, Seq: c.getSeq(), } req := membersFilteredRequest{ Tags: tags, Status: status, Name: name, } var resp membersResponse err := c.genericRPC(&header, &req, &resp) return resp.Members, err } // UserEvent is used to trigger sending an event func (c *RPCClient) UserEvent(name string, payload []byte, coalesce bool) error { header := requestHeader{ Command: eventCommand, Seq: c.getSeq(), } req := eventRequest{ Name: name, Payload: payload, Coalesce: coalesce, } return c.genericRPC(&header, &req, nil) } // Leave is used to trigger a graceful leave and shutdown of the agent func (c *RPCClient) Leave() error { header := requestHeader{ Command: leaveCommand, Seq: c.getSeq(), } return c.genericRPC(&header, nil, nil) } // UpdateTags will modify the tags on a running serf agent func (c *RPCClient) UpdateTags(tags map[string]string, delTags []string) error { header := requestHeader{ Command: tagsCommand, Seq: c.getSeq(), } req := tagsRequest{ Tags: tags, DeleteTags: delTags, } return c.genericRPC(&header, &req, nil) } // Respond allows a client to respond to a query event. The ID is the // ID of the Query to respond to, and the given payload is the response. 
func (c *RPCClient) Respond(id uint64, buf []byte) error { header := requestHeader{ Command: respondCommand, Seq: c.getSeq(), } req := respondRequest{ ID: id, Payload: buf, } return c.genericRPC(&header, &req, nil) } // InstallKey installs a new encryption key onto the keyring func (c *RPCClient) InstallKey(key string) (map[string]string, error) { header := requestHeader{ Command: installKeyCommand, Seq: c.getSeq(), } req := keyRequest{ Key: key, } resp := keyResponse{} err := c.genericRPC(&header, &req, &resp) return resp.Messages, err } // UseKey changes the primary encryption key on the keyring func (c *RPCClient) UseKey(key string) (map[string]string, error) { header := requestHeader{ Command: useKeyCommand, Seq: c.getSeq(), } req := keyRequest{ Key: key, } resp := keyResponse{} err := c.genericRPC(&header, &req, &resp) return resp.Messages, err } // RemoveKey removes an encryption key from the cluster keyring func (c *RPCClient) RemoveKey(key string) (map[string]string, error) { header := requestHeader{ Command: removeKeyCommand, Seq: c.getSeq(), } req := keyRequest{ Key: key, } resp := keyResponse{} err := c.genericRPC(&header, &req, &resp) return resp.Messages, err } // ListKeys returns all of the active keys on each member of the cluster func (c *RPCClient) ListKeys() (map[string]int, int, map[string]string, error) { header := requestHeader{ Command: listKeysCommand, Seq: c.getSeq(), } resp := keyResponse{} err := c.genericRPC(&header, nil, &resp) return resp.Keys, resp.NumNodes, resp.Messages, err } // Stats is used to get debugging state information func (c *RPCClient) Stats() (map[string]map[string]string, error) { header := requestHeader{ Command: statsCommand, Seq: c.getSeq(), } var resp map[string]map[string]string err := c.genericRPC(&header, nil, &resp) return resp, err } type monitorHandler struct { client *RPCClient closed bool init bool initCh chan<- error logCh chan<- string seq uint64 } func (mh *monitorHandler) Handle(resp *responseHeader) { //
Initialize on the first response if !mh.init { mh.init = true mh.initCh <- strToError(resp.Error) return } // Decode logs for all other responses var rec logRecord if err := mh.client.dec.Decode(&rec); err != nil { log.Printf("[ERR] Failed to decode log: %v", err) mh.client.deregisterHandler(mh.seq) return } select { case mh.logCh <- rec.Log: default: log.Printf("[ERR] Dropping log! Monitor channel full") } } func (mh *monitorHandler) Cleanup() { if !mh.closed { if !mh.init { mh.init = true mh.initCh <- fmt.Errorf("Stream closed") } if mh.logCh != nil { close(mh.logCh) } mh.closed = true } } // Monitor is used to subscribe to the logs of the agent func (c *RPCClient) Monitor(level logutils.LogLevel, ch chan<- string) (StreamHandle, error) { // Setup the request seq := c.getSeq() header := requestHeader{ Command: monitorCommand, Seq: seq, } req := monitorRequest{ LogLevel: string(level), } // Create a monitor handler initCh := make(chan error, 1) handler := &monitorHandler{ client: c, initCh: initCh, logCh: ch, seq: seq, } c.handleSeq(seq, handler) // Send the request if err := c.send(&header, &req); err != nil { c.deregisterHandler(seq) return 0, err } // Wait for a response select { case err := <-initCh: return StreamHandle(seq), err case <-c.shutdownCh: c.deregisterHandler(seq) return 0, clientClosed } } type streamHandler struct { client *RPCClient closed bool init bool initCh chan<- error eventCh chan<- map[string]interface{} seq uint64 } func (sh *streamHandler) Handle(resp *responseHeader) { // Initialize on the first response if !sh.init { sh.init = true sh.initCh <- strToError(resp.Error) return } // Decode logs for all other responses var rec map[string]interface{} if err := sh.client.dec.Decode(&rec); err != nil { log.Printf("[ERR] Failed to decode stream record: %v", err) sh.client.deregisterHandler(sh.seq) return } select { case sh.eventCh <- rec: default: log.Printf("[ERR] Dropping event! 
Stream channel full") } } func (sh *streamHandler) Cleanup() { if !sh.closed { if !sh.init { sh.init = true sh.initCh <- fmt.Errorf("Stream closed") } if sh.eventCh != nil { close(sh.eventCh) } sh.closed = true } } // Stream is used to subscribe to events func (c *RPCClient) Stream(filter string, ch chan<- map[string]interface{}) (StreamHandle, error) { // Setup the request seq := c.getSeq() header := requestHeader{ Command: streamCommand, Seq: seq, } req := streamRequest{ Type: filter, } // Create a monitor handler initCh := make(chan error, 1) handler := &streamHandler{ client: c, initCh: initCh, eventCh: ch, seq: seq, } c.handleSeq(seq, handler) // Send the request if err := c.send(&header, &req); err != nil { c.deregisterHandler(seq) return 0, err } // Wait for a response select { case err := <-initCh: return StreamHandle(seq), err case <-c.shutdownCh: c.deregisterHandler(seq) return 0, clientClosed } } type queryHandler struct { client *RPCClient closed bool init bool initCh chan<- error ackCh chan<- string respCh chan<- NodeResponse seq uint64 } func (qh *queryHandler) Handle(resp *responseHeader) { // Initialize on the first response if !qh.init { qh.init = true qh.initCh <- strToError(resp.Error) return } // Decode the query response var rec queryRecord if err := qh.client.dec.Decode(&rec); err != nil { log.Printf("[ERR] Failed to decode query response: %v", err) qh.client.deregisterHandler(qh.seq) return } switch rec.Type { case queryRecordAck: select { case qh.ackCh <- rec.From: default: log.Printf("[ERR] Dropping query ack, channel full") } case queryRecordResponse: select { case qh.respCh <- NodeResponse{rec.From, rec.Payload}: default: log.Printf("[ERR] Dropping query response, channel full") } case queryRecordDone: // No further records coming qh.client.deregisterHandler(qh.seq) default: log.Printf("[ERR] Unrecognized query record type: %s", rec.Type) } } func (qh *queryHandler) Cleanup() { if !qh.closed { if !qh.init { qh.init = true qh.initCh <- 
fmt.Errorf("Stream closed") } if qh.ackCh != nil { close(qh.ackCh) } if qh.respCh != nil { close(qh.respCh) } qh.closed = true } } // QueryParam is provided to Query to set various settings. type QueryParam struct { FilterNodes []string // A list of node names to restrict query to FilterTags map[string]string // A map of tag name to regex to filter on RequestAck bool // Should nodes ack the query receipt Timeout time.Duration // Maximum query duration. Optional, will be set automatically. Name string // Opaque query name Payload []byte // Opaque query payload AckCh chan<- string // Channel to send Ack replies on RespCh chan<- NodeResponse // Channel to send responses on } // Query initiates a new query message using the given parameters, and streams // acks and responses over the given channels. The channels will not block on // sends and should be buffered. At the end of the query, the channels will be // closed. func (c *RPCClient) Query(params *QueryParam) error { // Setup the request seq := c.getSeq() header := requestHeader{ Command: queryCommand, Seq: seq, } req := queryRequest{ FilterNodes: params.FilterNodes, FilterTags: params.FilterTags, RequestAck: params.RequestAck, Timeout: params.Timeout, Name: params.Name, Payload: params.Payload, } // Create a query handler initCh := make(chan error, 1) handler := &queryHandler{ client: c, initCh: initCh, ackCh: params.AckCh, respCh: params.RespCh, seq: seq, } c.handleSeq(seq, handler) // Send the request if err := c.send(&header, &req); err != nil { c.deregisterHandler(seq) return err } // Wait for a response select { case err := <-initCh: return err case <-c.shutdownCh: c.deregisterHandler(seq) return clientClosed } } // Stop is used to unsubscribe from logs or event streams func (c *RPCClient) Stop(handle StreamHandle) error { // Deregister locally first to stop delivery c.deregisterHandler(uint64(handle)) header := requestHeader{ Command: stopCommand, Seq: c.getSeq(), } req := stopRequest{ Stop: uint64(handle), }
return c.genericRPC(&header, &req, nil) } // handshake is used to perform the initial handshake on connect func (c *RPCClient) handshake() error { header := requestHeader{ Command: handshakeCommand, Seq: c.getSeq(), } req := handshakeRequest{ Version: maxIPCVersion, } return c.genericRPC(&header, &req, nil) } // auth is used to perform the initial authentication on connect func (c *RPCClient) auth(authKey string) error { header := requestHeader{ Command: authCommand, Seq: c.getSeq(), } req := authRequest{ AuthKey: authKey, } return c.genericRPC(&header, &req, nil) } // genericRPC is used to send a request and wait for an // errorSequenceResponse, potentially returning an error func (c *RPCClient) genericRPC(header *requestHeader, req interface{}, resp interface{}) error { // Setup a response handler errCh := make(chan error, 1) handler := func(respHeader *responseHeader) { // If we get an auth error, we should not wait for a request body if respHeader.Error == authRequired { goto SEND_ERR } if resp != nil { err := c.dec.Decode(resp) if err != nil { errCh <- err return } } SEND_ERR: errCh <- strToError(respHeader.Error) } c.handleSeq(header.Seq, &seqCallback{handler: handler}) defer c.deregisterHandler(header.Seq) // Send the request if err := c.send(header, req); err != nil { return err } // Wait for a response select { case err := <-errCh: return err case <-c.shutdownCh: return clientClosed } } // strToError converts a string to an error if not blank func strToError(s string) error { if s != "" { return fmt.Errorf(s) } return nil } // getSeq returns the next sequence number in a safe manner func (c *RPCClient) getSeq() uint64 { return atomic.AddUint64(&c.seq, 1) } // deregisterAll is used to deregister all handlers func (c *RPCClient) deregisterAll() { c.dispatchLock.Lock() defer c.dispatchLock.Unlock() for _, seqH := range c.dispatch { seqH.Cleanup() } c.dispatch = make(map[uint64]seqHandler) } // deregisterHandler is used to deregister a handler func (c 
*RPCClient) deregisterHandler(seq uint64) { c.dispatchLock.Lock() seqH, ok := c.dispatch[seq] delete(c.dispatch, seq) c.dispatchLock.Unlock() if ok { seqH.Cleanup() } } // handleSeq is used to set up a handler to wait on a response for // a given sequence number. func (c *RPCClient) handleSeq(seq uint64, handler seqHandler) { c.dispatchLock.Lock() defer c.dispatchLock.Unlock() c.dispatch[seq] = handler } // respondSeq is used to respond to a given sequence number func (c *RPCClient) respondSeq(seq uint64, respHeader *responseHeader) { c.dispatchLock.Lock() seqL, ok := c.dispatch[seq] c.dispatchLock.Unlock() // Get a registered listener, ignore if none if ok { seqL.Handle(respHeader) } } // listen is used to process data coming over the IPC channel, // and write it to the correct destination based on seq no func (c *RPCClient) listen() { defer c.Close() var respHeader responseHeader for { if err := c.dec.Decode(&respHeader); err != nil { if !c.shutdown { log.Printf("[ERR] agent.client: Failed to decode response header: %v", err) } break } c.respondSeq(respHeader.Seq, &respHeader) } } serf-0.6.4/command/000077500000000000000000000000001246721563000141045ustar00rootroot00000000000000serf-0.6.4/command/agent/000077500000000000000000000000001246721563000152025ustar00rootroot00000000000000serf-0.6.4/command/agent/agent.go000066400000000000000000000275021246721563000166350ustar00rootroot00000000000000package agent import ( "encoding/base64" "encoding/json" "fmt" "github.com/hashicorp/memberlist" "github.com/hashicorp/serf/serf" "io" "io/ioutil" "log" "os" "strings" "sync" ) // Agent starts and manages a Serf instance, adding some niceties // on top of Serf such as storing logs that you can later retrieve, // and invoking EventHandlers when events occur.
type Agent struct { // Stores the serf configuration conf *serf.Config // Stores the agent configuration agentConf *Config // eventCh is used for Serf to deliver events on eventCh chan serf.Event // eventHandlers is the registered handlers for events eventHandlers map[EventHandler]struct{} eventHandlerList []EventHandler eventHandlersLock sync.Mutex // logger instance wraps the logOutput logger *log.Logger // This is the underlying Serf we are wrapping serf *serf.Serf // shutdownCh is used for shutdowns shutdown bool shutdownCh chan struct{} shutdownLock sync.Mutex } // Create creates a new agent, potentially returning an error func Create(agentConf *Config, conf *serf.Config, logOutput io.Writer) (*Agent, error) { // Ensure we have a log sink if logOutput == nil { logOutput = os.Stderr } // Setup the underlying loggers conf.MemberlistConfig.LogOutput = logOutput conf.LogOutput = logOutput // Create a channel to listen for events from Serf eventCh := make(chan serf.Event, 64) conf.EventCh = eventCh // Setup the agent agent := &Agent{ conf: conf, agentConf: agentConf, eventCh: eventCh, eventHandlers: make(map[EventHandler]struct{}), logger: log.New(logOutput, "", log.LstdFlags), shutdownCh: make(chan struct{}), } // Restore agent tags from a tags file if agentConf.TagsFile != "" { if err := agent.loadTagsFile(agentConf.TagsFile); err != nil { return nil, err } } // Load in a keyring file if provided if agentConf.KeyringFile != "" { if err := agent.loadKeyringFile(agentConf.KeyringFile); err != nil { return nil, err } } return agent, nil } // Start is used to initiate the event listeners.
It is separate from // create so that there isn't a race condition between creating the // agent and registering handlers func (a *Agent) Start() error { a.logger.Printf("[INFO] agent: Serf agent starting") // Create serf first serf, err := serf.Create(a.conf) if err != nil { return fmt.Errorf("Error creating Serf: %s", err) } a.serf = serf // Start event loop go a.eventLoop() return nil } // Leave prepares for a graceful shutdown of the agent and its processes func (a *Agent) Leave() error { if a.serf == nil { return nil } a.logger.Println("[INFO] agent: requesting graceful leave from Serf") return a.serf.Leave() } // Shutdown closes this agent and all of its processes. Should be preceded // by a Leave for a graceful shutdown. func (a *Agent) Shutdown() error { a.shutdownLock.Lock() defer a.shutdownLock.Unlock() if a.shutdown { return nil } if a.serf == nil { goto EXIT } a.logger.Println("[INFO] agent: requesting serf shutdown") if err := a.serf.Shutdown(); err != nil { return err } EXIT: a.logger.Println("[INFO] agent: shutdown complete") a.shutdown = true close(a.shutdownCh) return nil } // ShutdownCh returns a channel that can be selected to wait // for the agent to perform a shutdown. func (a *Agent) ShutdownCh() <-chan struct{} { return a.shutdownCh } // Returns the Serf agent of the running Agent. func (a *Agent) Serf() *serf.Serf { return a.serf } // Returns the Serf config of the running Agent. func (a *Agent) SerfConfig() *serf.Config { return a.conf } // Join asks the Serf instance to join. See the Serf.Join function. 
func (a *Agent) Join(addrs []string, replay bool) (n int, err error) { a.logger.Printf("[INFO] agent: joining: %v replay: %v", addrs, replay) ignoreOld := !replay n, err = a.serf.Join(addrs, ignoreOld) if n > 0 { a.logger.Printf("[INFO] agent: joined: %d nodes", n) } if err != nil { a.logger.Printf("[WARN] agent: error joining: %v", err) } return } // ForceLeave is used to eject a failed node from the cluster func (a *Agent) ForceLeave(node string) error { a.logger.Printf("[INFO] agent: Force leaving node: %s", node) err := a.serf.RemoveFailedNode(node) if err != nil { a.logger.Printf("[WARN] agent: failed to remove node: %v", err) } return err } // UserEvent sends a UserEvent on Serf, see Serf.UserEvent. func (a *Agent) UserEvent(name string, payload []byte, coalesce bool) error { a.logger.Printf("[DEBUG] agent: Requesting user event send: %s. Coalesced: %#v. Payload: %#v", name, coalesce, string(payload)) err := a.serf.UserEvent(name, payload, coalesce) if err != nil { a.logger.Printf("[WARN] agent: failed to send user event: %v", err) } return err } // Query sends a Query on Serf, see Serf.Query. func (a *Agent) Query(name string, payload []byte, params *serf.QueryParam) (*serf.QueryResponse, error) { // Prevent the use of the internal prefix if strings.HasPrefix(name, serf.InternalQueryPrefix) { // Allow the special "ping" query if name != serf.InternalQueryPrefix+"ping" || payload != nil { return nil, fmt.Errorf("Queries cannot contain the '%s' prefix", serf.InternalQueryPrefix) } } a.logger.Printf("[DEBUG] agent: Requesting query send: %s. 
Payload: %#v", name, string(payload)) resp, err := a.serf.Query(name, payload, params) if err != nil { a.logger.Printf("[WARN] agent: failed to start user query: %v", err) } return resp, err } // RegisterEventHandler adds an event handler to receive event notifications func (a *Agent) RegisterEventHandler(eh EventHandler) { a.eventHandlersLock.Lock() defer a.eventHandlersLock.Unlock() a.eventHandlers[eh] = struct{}{} a.eventHandlerList = nil for eh := range a.eventHandlers { a.eventHandlerList = append(a.eventHandlerList, eh) } } // DeregisterEventHandler removes an EventHandler and prevents more invocations func (a *Agent) DeregisterEventHandler(eh EventHandler) { a.eventHandlersLock.Lock() defer a.eventHandlersLock.Unlock() delete(a.eventHandlers, eh) a.eventHandlerList = nil for eh := range a.eventHandlers { a.eventHandlerList = append(a.eventHandlerList, eh) } } // eventLoop listens to events from Serf and fans out to event handlers func (a *Agent) eventLoop() { serfShutdownCh := a.serf.ShutdownCh() for { select { case e := <-a.eventCh: a.logger.Printf("[INFO] agent: Received event: %s", e.String()) a.eventHandlersLock.Lock() handlers := a.eventHandlerList a.eventHandlersLock.Unlock() for _, eh := range handlers { eh.HandleEvent(e) } case <-serfShutdownCh: a.logger.Printf("[WARN] agent: Serf shutdown detected, quitting") a.Shutdown() return case <-a.shutdownCh: return } } } // InstallKey initiates a query to install a new key on all members func (a *Agent) InstallKey(key string) (*serf.KeyResponse, error) { a.logger.Print("[INFO] agent: Initiating key installation") manager := a.serf.KeyManager() return manager.InstallKey(key) } // UseKey sends a query instructing all members to switch primary keys func (a *Agent) UseKey(key string) (*serf.KeyResponse, error) { a.logger.Print("[INFO] agent: Initiating primary key change") manager := a.serf.KeyManager() return manager.UseKey(key) } // RemoveKey sends a query to all members to remove a key from the keyring func 
(a *Agent) RemoveKey(key string) (*serf.KeyResponse, error) { a.logger.Print("[INFO] agent: Initiating key removal") manager := a.serf.KeyManager() return manager.RemoveKey(key) } // ListKeys sends a query to all members to return a list of their keys func (a *Agent) ListKeys() (*serf.KeyResponse, error) { a.logger.Print("[INFO] agent: Initiating key listing") manager := a.serf.KeyManager() return manager.ListKeys() } // SetTags is used to update the tags. The agent will make sure to // persist tags if necessary before gossiping to the cluster. func (a *Agent) SetTags(tags map[string]string) error { // Update the tags file if we have one if a.agentConf.TagsFile != "" { if err := a.writeTagsFile(tags); err != nil { a.logger.Printf("[ERR] agent: %s", err) return err } } // Set the tags in Serf, start gossiping out return a.serf.SetTags(tags) } // loadTagsFile will load agent tags out of a file and set them in the // current serf configuration. func (a *Agent) loadTagsFile(tagsFile string) error { // Avoid passing tags and using a tags file at the same time if len(a.agentConf.Tags) > 0 { return fmt.Errorf("Tags config not allowed while using tag files") } if _, err := os.Stat(tagsFile); err == nil { tagData, err := ioutil.ReadFile(tagsFile) if err != nil { return fmt.Errorf("Failed to read tags file: %s", err) } if err := json.Unmarshal(tagData, &a.conf.Tags); err != nil { return fmt.Errorf("Failed to decode tags file: %s", err) } a.logger.Printf("[INFO] agent: Restored %d tag(s) from %s", len(a.conf.Tags), tagsFile) } // Success! return nil } // writeTagsFile will write the current tags to the configured tags file. 
func (a *Agent) writeTagsFile(tags map[string]string) error { encoded, err := json.MarshalIndent(tags, "", " ") if err != nil { return fmt.Errorf("Failed to encode tags: %s", err) } // Use 0600 for permissions, in case tag data is sensitive if err = ioutil.WriteFile(a.agentConf.TagsFile, encoded, 0600); err != nil { return fmt.Errorf("Failed to write tags file: %s", err) } // Success! return nil } // MarshalTags is a utility function which takes a map of tag key/value pairs // and returns the same tags as strings in 'key=value' format. func MarshalTags(tags map[string]string) []string { var result []string for name, value := range tags { result = append(result, fmt.Sprintf("%s=%s", name, value)) } return result } // UnmarshalTags is a utility function which takes a slice of strings in // key=value format and returns them as a tag mapping. func UnmarshalTags(tags []string) (map[string]string, error) { result := make(map[string]string) for _, tag := range tags { parts := strings.SplitN(tag, "=", 2) if len(parts) != 2 || len(parts[0]) == 0 { return nil, fmt.Errorf("Invalid tag: '%s'", tag) } result[parts[0]] = parts[1] } return result, nil } // loadKeyringFile will load a keyring out of a file func (a *Agent) loadKeyringFile(keyringFile string) error { // Avoid passing an encryption key and a keyring file at the same time if len(a.agentConf.EncryptKey) > 0 { return fmt.Errorf("Encryption key not allowed while using a keyring") } if _, err := os.Stat(keyringFile); err != nil { return err } // Read in the keyring file data keyringData, err := ioutil.ReadFile(keyringFile) if err != nil { return fmt.Errorf("Failed to read keyring file: %s", err) } // Decode keyring JSON keys := make([]string, 0) if err := json.Unmarshal(keyringData, &keys); err != nil { return fmt.Errorf("Failed to decode keyring file: %s", err) } // Decode base64 values keysDecoded := make([][]byte, len(keys)) for i, key := range keys { keyBytes, err := base64.StdEncoding.DecodeString(key) if err != nil 
{ return fmt.Errorf("Failed to decode key from keyring: %s", err) } keysDecoded[i] = keyBytes } // Guard against empty keyring file if len(keysDecoded) == 0 { return fmt.Errorf("Keyring file contains no keys") } // Create the keyring keyring, err := memberlist.NewKeyring(keysDecoded, keysDecoded[0]) if err != nil { return fmt.Errorf("Failed to restore keyring: %s", err) } a.conf.MemberlistConfig.Keyring = keyring a.logger.Printf("[INFO] agent: Restored keyring with %d keys from %s", len(keys), keyringFile) // Success! return nil } // Stats is used to get various runtime information and stats func (a *Agent) Stats() map[string]map[string]string { local := a.serf.LocalMember() output := map[string]map[string]string{ "agent": map[string]string{ "name": local.Name, }, "runtime": runtimeStats(), "serf": a.serf.Stats(), "tags": local.Tags, } return output } serf-0.6.4/command/agent/agent_test.go000066400000000000000000000135341246721563000176740ustar00rootroot00000000000000package agent import ( "encoding/json" "github.com/hashicorp/serf/serf" "github.com/hashicorp/serf/testutil" "io/ioutil" "os" "path/filepath" "reflect" "strings" "testing" ) func TestAgent_eventHandler(t *testing.T) { a1 := testAgent(nil) defer a1.Shutdown() defer a1.Leave() handler := new(MockEventHandler) a1.RegisterEventHandler(handler) if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() if len(handler.Events) != 1 { t.Fatalf("bad: %#v", handler.Events) } if handler.Events[0].EventType() != serf.EventMemberJoin { t.Fatalf("bad: %#v", handler.Events[0]) } } func TestAgentShutdown_multiple(t *testing.T) { a := testAgent(nil) if err := a.Start(); err != nil { t.Fatalf("err: %s", err) } for i := 0; i < 5; i++ { if err := a.Shutdown(); err != nil { t.Fatalf("err: %s", err) } } } func TestAgentUserEvent(t *testing.T) { a1 := testAgent(nil) defer a1.Shutdown() defer a1.Leave() handler := new(MockEventHandler) a1.RegisterEventHandler(handler) if err := a1.Start(); err != nil { 
t.Fatalf("err: %s", err) } testutil.Yield() if err := a1.UserEvent("deploy", []byte("foo"), false); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() handler.Lock() defer handler.Unlock() if len(handler.Events) == 0 { t.Fatal("no events") } e, ok := handler.Events[len(handler.Events)-1].(serf.UserEvent) if !ok { t.Fatalf("bad: %#v", e) } if e.Name != "deploy" { t.Fatalf("bad: %#v", e) } if string(e.Payload) != "foo" { t.Fatalf("bad: %#v", e) } } func TestAgentQuery_BadPrefix(t *testing.T) { a1 := testAgent(nil) defer a1.Shutdown() defer a1.Leave() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() _, err := a1.Query("_serf_test", nil, nil) if err == nil || !strings.Contains(err.Error(), "cannot contain") { t.Fatalf("err: %s", err) } } func TestAgentTagsFile(t *testing.T) { tags := map[string]string{ "role": "webserver", "datacenter": "us-east", } td, err := ioutil.TempDir("", "serf") if err != nil { t.Fatalf("err: %s", err) } defer os.RemoveAll(td) agentConfig := DefaultConfig() agentConfig.TagsFile = filepath.Join(td, "tags.json") a1 := testAgentWithConfig(agentConfig, serf.DefaultConfig(), nil) if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } defer a1.Shutdown() defer a1.Leave() testutil.Yield() err = a1.SetTags(tags) if err != nil { t.Fatalf("err: %s", err) } testutil.Yield() a2 := testAgentWithConfig(agentConfig, serf.DefaultConfig(), nil) if err := a2.Start(); err != nil { t.Fatalf("err: %s", err) } defer a2.Shutdown() defer a2.Leave() testutil.Yield() m := a2.Serf().LocalMember() if !reflect.DeepEqual(m.Tags, tags) { t.Fatalf("tags not restored: %#v", m.Tags) } } func TestAgentTagsFile_BadOptions(t *testing.T) { agentConfig := DefaultConfig() agentConfig.TagsFile = "/some/path" agentConfig.Tags = map[string]string{ "tag1": "val1", } _, err := Create(agentConfig, serf.DefaultConfig(), nil) if err == nil || !strings.Contains(err.Error(), "not allowed") { t.Fatalf("err: %s", err) } } func TestAgent_MarshalTags(t 
*testing.T) { tags := map[string]string{ "tag1": "val1", "tag2": "val2", } tagPairs := MarshalTags(tags) if !containsKey(tagPairs, "tag1=val1") { t.Fatalf("bad: %v", tagPairs) } if !containsKey(tagPairs, "tag2=val2") { t.Fatalf("bad: %v", tagPairs) } } func TestAgent_UnmarshalTags(t *testing.T) { tagPairs := []string{ "tag1=val1", "tag2=val2", } tags, err := UnmarshalTags(tagPairs) if err != nil { t.Fatalf("err: %s", err) } if v, ok := tags["tag1"]; !ok || v != "val1" { t.Fatalf("bad: %v", tags) } if v, ok := tags["tag2"]; !ok || v != "val2" { t.Fatalf("bad: %v", tags) } } func TestAgent_UnmarshalTagsError(t *testing.T) { tagSets := [][]string{ []string{"="}, []string{"=x"}, []string{""}, []string{"x"}, } for _, tagPairs := range tagSets { if _, err := UnmarshalTags(tagPairs); err == nil { t.Fatalf("Expected tag error: %s", tagPairs[0]) } } } func TestAgentKeyringFile(t *testing.T) { keys := []string{ "enjTwAFRe4IE71bOFhirzQ==", "csT9mxI7aTf9ap3HLBbdmA==", "noha2tVc0OyD/2LtCBoAOQ==", } td, err := ioutil.TempDir("", "serf") if err != nil { t.Fatalf("err: %s", err) } defer os.RemoveAll(td) keyringFile := filepath.Join(td, "keyring.json") serfConfig := serf.DefaultConfig() agentConfig := DefaultConfig() agentConfig.KeyringFile = keyringFile encodedKeys, err := json.Marshal(keys) if err != nil { t.Fatalf("err: %s", err) } if err := ioutil.WriteFile(keyringFile, encodedKeys, 0600); err != nil { t.Fatalf("err: %s", err) } a1 := testAgentWithConfig(agentConfig, serfConfig, nil) if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } defer a1.Shutdown() testutil.Yield() totalLoadedKeys := len(serfConfig.MemberlistConfig.Keyring.GetKeys()) if totalLoadedKeys != 3 { t.Fatalf("Expected to load 3 keys but got %d", totalLoadedKeys) } } func TestAgentKeyringFile_BadOptions(t *testing.T) { agentConfig := DefaultConfig() agentConfig.KeyringFile = "/some/path" agentConfig.EncryptKey = "pL4owv4IE1x+ZXCyd5vLLg==" _, err := Create(agentConfig, serf.DefaultConfig(), nil) if err 
== nil || !strings.Contains(err.Error(), "not allowed") { t.Fatalf("err: %s", err) } } func TestAgentKeyringFile_NoKeys(t *testing.T) { dir, err := ioutil.TempDir("", "serf") if err != nil { t.Fatalf("err: %s", err) } defer os.RemoveAll(dir) keysFile := filepath.Join(dir, "keyring") if err := ioutil.WriteFile(keysFile, []byte("[]"), 0600); err != nil { t.Fatalf("err: %s", err) } agentConfig := DefaultConfig() agentConfig.KeyringFile = keysFile _, err = Create(agentConfig, serf.DefaultConfig(), nil) if err == nil { t.Fatalf("should have errored") } if !strings.Contains(err.Error(), "contains no keys") { t.Fatalf("bad: %s", err) } } serf-0.6.4/command/agent/command.go000066400000000000000000000575341246721563000171650ustar00rootroot00000000000000package agent import ( "flag" "fmt" "github.com/armon/go-metrics" "github.com/hashicorp/go-syslog" "github.com/hashicorp/logutils" "github.com/hashicorp/memberlist" "github.com/hashicorp/serf/serf" "github.com/mitchellh/cli" "io" "log" "net" "os" "os/signal" "runtime" "strings" "syscall" "time" ) const ( // gracefulTimeout controls how long we wait before forcefully terminating gracefulTimeout = 3 * time.Second // minRetryInterval applies a lower bound to the join retry interval minRetryInterval = time.Second ) // Command is a Command implementation that runs a Serf agent. // The command will not end unless a shutdown message is sent on the // ShutdownCh. If two messages are sent on the ShutdownCh it will forcibly // exit. 
type Command struct { Ui cli.Ui ShutdownCh <-chan struct{} args []string scriptHandler *ScriptEventHandler logFilter *logutils.LevelFilter logger *log.Logger } // readConfig is responsible for setup of our configuration using // the command line and any file configs func (c *Command) readConfig() *Config { var cmdConfig Config var configFiles []string var tags []string var retryInterval string cmdFlags := flag.NewFlagSet("agent", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } cmdFlags.StringVar(&cmdConfig.BindAddr, "bind", "", "address to bind listeners to") cmdFlags.StringVar(&cmdConfig.AdvertiseAddr, "advertise", "", "address to advertise to cluster") cmdFlags.Var((*AppendSliceValue)(&configFiles), "config-file", "json file to read config from") cmdFlags.Var((*AppendSliceValue)(&configFiles), "config-dir", "directory of json files to read") cmdFlags.StringVar(&cmdConfig.EncryptKey, "encrypt", "", "encryption key") cmdFlags.StringVar(&cmdConfig.KeyringFile, "keyring-file", "", "path to the keyring file") cmdFlags.Var((*AppendSliceValue)(&cmdConfig.EventHandlers), "event-handler", "command to execute when events occur") cmdFlags.Var((*AppendSliceValue)(&cmdConfig.StartJoin), "join", "address of agent to join on startup") cmdFlags.BoolVar(&cmdConfig.ReplayOnJoin, "replay", false, "replay events for startup join") cmdFlags.StringVar(&cmdConfig.LogLevel, "log-level", "", "log level") cmdFlags.StringVar(&cmdConfig.NodeName, "node", "", "node name") cmdFlags.IntVar(&cmdConfig.Protocol, "protocol", -1, "protocol version") cmdFlags.StringVar(&cmdConfig.Role, "role", "", "role name") cmdFlags.StringVar(&cmdConfig.RPCAddr, "rpc-addr", "", "address to bind RPC listener to") cmdFlags.StringVar(&cmdConfig.Profile, "profile", "", "timing profile to use (lan, wan, local)") cmdFlags.StringVar(&cmdConfig.SnapshotPath, "snapshot", "", "path to the snapshot file") cmdFlags.Var((*AppendSliceValue)(&tags), "tag", "tag pair, specified as key=value") 
cmdFlags.StringVar(&cmdConfig.Discover, "discover", "", "mDNS discovery name") cmdFlags.StringVar(&cmdConfig.Interface, "iface", "", "interface to bind to") cmdFlags.StringVar(&cmdConfig.TagsFile, "tags-file", "", "tag persistence file") cmdFlags.BoolVar(&cmdConfig.EnableSyslog, "syslog", false, "enable logging to syslog facility") cmdFlags.Var((*AppendSliceValue)(&cmdConfig.RetryJoin), "retry-join", "address of agent to join on startup with retry") cmdFlags.IntVar(&cmdConfig.RetryMaxAttempts, "retry-max", 0, "maximum retry join attempts") cmdFlags.StringVar(&retryInterval, "retry-interval", "", "retry join interval") cmdFlags.BoolVar(&cmdConfig.RejoinAfterLeave, "rejoin", false, "enable re-joining after a previous leave") if err := cmdFlags.Parse(c.args); err != nil { return nil } // Parse any command line tag values tagValues, err := UnmarshalTags(tags) if err != nil { c.Ui.Error(fmt.Sprintf("Error: %s", err)) return nil } cmdConfig.Tags = tagValues // Decode the interval if given if retryInterval != "" { dur, err := time.ParseDuration(retryInterval) if err != nil { c.Ui.Error(fmt.Sprintf("Error: %s", err)) return nil } cmdConfig.RetryInterval = dur } config := DefaultConfig() if len(configFiles) > 0 { fileConfig, err := ReadConfigPaths(configFiles) if err != nil { c.Ui.Error(err.Error()) return nil } config = MergeConfig(config, fileConfig) } config = MergeConfig(config, &cmdConfig) if config.NodeName == "" { hostname, err := os.Hostname() if err != nil { c.Ui.Error(fmt.Sprintf("Error determining hostname: %s", err)) return nil } config.NodeName = hostname } eventScripts := config.EventScripts() for _, script := range eventScripts { if !script.Valid() { c.Ui.Error(fmt.Sprintf("Invalid event script: %s", script.String())) return nil } } // Check for a valid interface if _, err := config.NetworkInterface(); err != nil { c.Ui.Error(fmt.Sprintf("Invalid network interface: %s", err)) return nil } // Backward compatibility hack for 'Role' if config.Role != "" { 
c.Ui.Output("Deprecation warning: 'Role' has been replaced with 'Tags'") config.Tags["role"] = config.Role } // Check for sane retry interval if config.RetryInterval < minRetryInterval { c.Ui.Output(fmt.Sprintf("Warning: 'RetryInterval' is too low. Setting to %v", config.RetryInterval)) config.RetryInterval = minRetryInterval } // Check snapshot file is provided if we have RejoinAfterLeave if config.RejoinAfterLeave && config.SnapshotPath == "" { c.Ui.Output("Warning: 'RejoinAfterLeave' enabled without snapshot file") } return config } // setupAgent is used to create the agent we use func (c *Command) setupAgent(config *Config, logOutput io.Writer) *Agent { bindIP, bindPort, err := config.AddrParts(config.BindAddr) if err != nil { c.Ui.Error(fmt.Sprintf("Invalid bind address: %s", err)) return nil } // Check if we have an interface if iface, _ := config.NetworkInterface(); iface != nil { addrs, err := iface.Addrs() if err != nil { c.Ui.Error(fmt.Sprintf("Failed to get interface addresses: %s", err)) return nil } if len(addrs) == 0 { c.Ui.Error(fmt.Sprintf("Interface '%s' has no addresses", config.Interface)) return nil } // If there is no bind IP, pick an address if bindIP == "0.0.0.0" { found := false for _, a := range addrs { var addrIP net.IP if runtime.GOOS == "windows" { // Waiting for https://github.com/golang/go/issues/5395 to use IPNet only addr, ok := a.(*net.IPAddr) if !ok { continue } addrIP = addr.IP } else { addr, ok := a.(*net.IPNet) if !ok { continue } addrIP = addr.IP } // Skip self-assigned IPs if addrIP.IsLinkLocalUnicast() { continue } // Found an IP found = true bindIP = addrIP.String() c.Ui.Output(fmt.Sprintf("Using interface '%s' address '%s'", config.Interface, bindIP)) // Update the configuration bindAddr := &net.TCPAddr{ IP: net.ParseIP(bindIP), Port: bindPort, } config.BindAddr = bindAddr.String() break } if !found { c.Ui.Error(fmt.Sprintf("Failed to find usable address for interface '%s'", config.Interface)) return nil } } else { // If 
there is a bind IP, ensure it is available found := false for _, a := range addrs { addr, ok := a.(*net.IPNet) if !ok { continue } if addr.IP.String() == bindIP { found = true break } } if !found { c.Ui.Error(fmt.Sprintf("Interface '%s' has no '%s' address", config.Interface, bindIP)) return nil } } } var advertiseIP string var advertisePort int if config.AdvertiseAddr != "" { advertiseIP, advertisePort, err = config.AddrParts(config.AdvertiseAddr) if err != nil { c.Ui.Error(fmt.Sprintf("Invalid advertise address: %s", err)) return nil } } encryptKey, err := config.EncryptBytes() if err != nil { c.Ui.Error(fmt.Sprintf("Invalid encryption key: %s", err)) return nil } serfConfig := serf.DefaultConfig() switch config.Profile { case "lan": serfConfig.MemberlistConfig = memberlist.DefaultLANConfig() case "wan": serfConfig.MemberlistConfig = memberlist.DefaultWANConfig() case "local": serfConfig.MemberlistConfig = memberlist.DefaultLocalConfig() default: c.Ui.Error(fmt.Sprintf("Unknown profile: %s", config.Profile)) return nil } serfConfig.MemberlistConfig.BindAddr = bindIP serfConfig.MemberlistConfig.BindPort = bindPort serfConfig.MemberlistConfig.AdvertiseAddr = advertiseIP serfConfig.MemberlistConfig.AdvertisePort = advertisePort serfConfig.MemberlistConfig.SecretKey = encryptKey serfConfig.NodeName = config.NodeName serfConfig.Tags = config.Tags serfConfig.SnapshotPath = config.SnapshotPath serfConfig.ProtocolVersion = uint8(config.Protocol) serfConfig.CoalescePeriod = 3 * time.Second serfConfig.QuiescentPeriod = time.Second serfConfig.UserCoalescePeriod = 3 * time.Second serfConfig.UserQuiescentPeriod = time.Second if config.ReconnectInterval != 0 { serfConfig.ReconnectInterval = config.ReconnectInterval } if config.ReconnectTimeout != 0 { serfConfig.ReconnectTimeout = config.ReconnectTimeout } if config.TombstoneTimeout != 0 { serfConfig.TombstoneTimeout = config.TombstoneTimeout } serfConfig.EnableNameConflictResolution = !config.DisableNameResolution if 
config.KeyringFile != "" { serfConfig.KeyringFile = config.KeyringFile } serfConfig.RejoinAfterLeave = config.RejoinAfterLeave // Start Serf c.Ui.Output("Starting Serf agent...") agent, err := Create(config, serfConfig, logOutput) if err != nil { c.Ui.Error(fmt.Sprintf("Failed to start the Serf agent: %v", err)) return nil } return agent } // setupLoggers is used to setup the logGate, logWriter, and our logOutput func (c *Command) setupLoggers(config *Config) (*GatedWriter, *logWriter, io.Writer) { // Setup logging. First create the gated log writer, which will // store logs until we're ready to show them. Then create the level // filter, filtering logs of the specified level. logGate := &GatedWriter{ Writer: &cli.UiWriter{Ui: c.Ui}, } c.logFilter = LevelFilter() c.logFilter.MinLevel = logutils.LogLevel(strings.ToUpper(config.LogLevel)) c.logFilter.Writer = logGate if !ValidateLevelFilter(c.logFilter.MinLevel, c.logFilter) { c.Ui.Error(fmt.Sprintf( "Invalid log level: %s. Valid log levels are: %v", c.logFilter.MinLevel, c.logFilter.Levels)) return nil, nil, nil } // Check if syslog is enabled var syslog io.Writer if config.EnableSyslog { l, err := gsyslog.NewLogger(gsyslog.LOG_NOTICE, config.SyslogFacility, "serf") if err != nil { c.Ui.Error(fmt.Sprintf("Syslog setup failed: %v", err)) return nil, nil, nil } syslog = &SyslogWrapper{l} } // Create a log writer, and wrap a logOutput around it logWriter := NewLogWriter(512) var logOutput io.Writer if syslog != nil { logOutput = io.MultiWriter(c.logFilter, logWriter, syslog) } else { logOutput = io.MultiWriter(c.logFilter, logWriter) } // Create a logger c.logger = log.New(logOutput, "", log.LstdFlags) return logGate, logWriter, logOutput } // startAgent is used to start the agent and IPC func (c *Command) startAgent(config *Config, agent *Agent, logWriter *logWriter, logOutput io.Writer) *AgentIPC { // Add the script event handlers c.scriptHandler = &ScriptEventHandler{ SelfFunc: func() serf.Member { return 
agent.Serf().LocalMember() }, Scripts: config.EventScripts(), Logger: log.New(logOutput, "", log.LstdFlags), } agent.RegisterEventHandler(c.scriptHandler) // Start the agent after the handler is registered if err := agent.Start(); err != nil { c.Ui.Error(fmt.Sprintf("Failed to start the Serf agent: %v", err)) return nil } // Parse the bind address information bindIP, bindPort, err := config.AddrParts(config.BindAddr) bindAddr := &net.TCPAddr{IP: net.ParseIP(bindIP), Port: bindPort} // Start the discovery layer if config.Discover != "" { // Use the advertise addr and port local := agent.Serf().Memberlist().LocalNode() // Get the bind interface if any iface, _ := config.NetworkInterface() _, err := NewAgentMDNS(agent, logOutput, config.ReplayOnJoin, config.NodeName, config.Discover, iface, local.Addr, int(local.Port)) if err != nil { c.Ui.Error(fmt.Sprintf("Error starting mDNS listener: %s", err)) return nil } } // Setup the RPC listener rpcListener, err := net.Listen("tcp", config.RPCAddr) if err != nil { c.Ui.Error(fmt.Sprintf("Error starting RPC listener: %s", err)) return nil } // Start the IPC layer c.Ui.Output("Starting Serf agent RPC...") ipc := NewAgentIPC(agent, config.RPCAuthKey, rpcListener, logOutput, logWriter) c.Ui.Output("Serf agent running!") c.Ui.Info(fmt.Sprintf(" Node name: '%s'", config.NodeName)) c.Ui.Info(fmt.Sprintf(" Bind addr: '%s'", bindAddr.String())) if config.AdvertiseAddr != "" { advertiseIP, advertisePort, _ := config.AddrParts(config.AdvertiseAddr) advertiseAddr := (&net.TCPAddr{IP: net.ParseIP(advertiseIP), Port: advertisePort}).String() c.Ui.Info(fmt.Sprintf("Advertise addr: '%s'", advertiseAddr)) } c.Ui.Info(fmt.Sprintf(" RPC addr: '%s'", config.RPCAddr)) c.Ui.Info(fmt.Sprintf(" Encrypted: %#v", agent.serf.EncryptionEnabled())) c.Ui.Info(fmt.Sprintf(" Snapshot: %v", config.SnapshotPath != "")) c.Ui.Info(fmt.Sprintf(" Profile: %s", config.Profile)) if config.Discover != "" { c.Ui.Info(fmt.Sprintf(" mDNS cluster: %s", 
config.Discover))
	}

	return ipc
}

// startupJoin is invoked to handle any joins specified to take place at start time
func (c *Command) startupJoin(config *Config, agent *Agent) error {
	if len(config.StartJoin) == 0 {
		return nil
	}

	c.Ui.Output(fmt.Sprintf("Joining cluster...(replay: %v)", config.ReplayOnJoin))
	n, err := agent.Join(config.StartJoin, config.ReplayOnJoin)
	if err != nil {
		return err
	}

	c.Ui.Info(fmt.Sprintf("Join completed. Synced with %d initial agents", n))
	return nil
}

// retryJoin is invoked to handle joins with retries. This runs until at least a
// single successful join or RetryMaxAttempts is reached
func (c *Command) retryJoin(config *Config, agent *Agent, errCh chan struct{}) {
	// Quit fast if there are no nodes to join
	if len(config.RetryJoin) == 0 {
		return
	}

	// Track the number of join attempts
	attempt := 0
	for {
		// Try to perform the join
		c.logger.Printf("[INFO] agent: Joining cluster...(replay: %v)", config.ReplayOnJoin)
		n, err := agent.Join(config.RetryJoin, config.ReplayOnJoin)
		if err == nil {
			c.logger.Printf("[INFO] agent: Join completed. Synced with %d initial agents", n)
			return
		}

		// Check if the maximum attempts have been exceeded
		attempt++
		if config.RetryMaxAttempts > 0 && attempt > config.RetryMaxAttempts {
			c.logger.Printf("[ERR] agent: maximum retry join attempts made, exiting")
			close(errCh)
			return
		}

		// Log the failure and sleep
		c.logger.Printf("[WARN] agent: Join failed: %v, retrying in %v", err, config.RetryInterval)
		time.Sleep(config.RetryInterval)
	}
}

func (c *Command) Run(args []string) int {
	c.Ui = &cli.PrefixedUi{
		OutputPrefix: "==> ",
		InfoPrefix:   "    ",
		ErrorPrefix:  "==> ",
		Ui:           c.Ui,
	}

	// Parse our configs
	c.args = args
	config := c.readConfig()
	if config == nil {
		return 1
	}

	// Setup the log outputs
	logGate, logWriter, logOutput := c.setupLoggers(config)
	if logWriter == nil {
		return 1
	}

	/* Setup telemetry
	Aggregate on 10 second intervals for 1 minute. Expose the
	metrics over stderr when there is a SIGUSR1 received.
*/ inm := metrics.NewInmemSink(10*time.Second, time.Minute) metrics.DefaultInmemSignal(inm) metricsConf := metrics.DefaultConfig("serf-agent") if config.StatsiteAddr != "" { sink, err := metrics.NewStatsiteSink(config.StatsiteAddr) if err != nil { c.Ui.Error(fmt.Sprintf("Failed to start statsite sink. Got: %s", err)) return 1 } fanout := metrics.FanoutSink{inm, sink} metrics.NewGlobal(metricsConf, fanout) } else { metricsConf.EnableHostname = false metrics.NewGlobal(metricsConf, inm) } // Setup serf agent := c.setupAgent(config, logOutput) if agent == nil { return 1 } defer agent.Shutdown() // Start the agent ipc := c.startAgent(config, agent, logWriter, logOutput) if ipc == nil { return 1 } defer ipc.Shutdown() // Join startup nodes if specified if err := c.startupJoin(config, agent); err != nil { c.Ui.Error(err.Error()) return 1 } // Enable log streaming c.Ui.Info("") c.Ui.Output("Log data will now stream in as it occurs:\n") logGate.Flush() // Start the retry joins retryJoinCh := make(chan struct{}) go c.retryJoin(config, agent, retryJoinCh) // Wait for exit return c.handleSignals(config, agent, retryJoinCh) } // handleSignals blocks until we get an exit-causing signal func (c *Command) handleSignals(config *Config, agent *Agent, retryJoin chan struct{}) int { signalCh := make(chan os.Signal, 4) signal.Notify(signalCh, os.Interrupt, syscall.SIGTERM, syscall.SIGHUP) // Wait for a signal WAIT: var sig os.Signal select { case s := <-signalCh: sig = s case <-c.ShutdownCh: sig = os.Interrupt case <-retryJoin: // Retry join failed! return 1 case <-agent.ShutdownCh(): // Agent is already shutdown! 
return 0 } c.Ui.Output(fmt.Sprintf("Caught signal: %v", sig)) // Check if this is a SIGHUP if sig == syscall.SIGHUP { config = c.handleReload(config, agent) goto WAIT } // Check if we should do a graceful leave graceful := false if sig == os.Interrupt && !config.SkipLeaveOnInt { graceful = true } else if sig == syscall.SIGTERM && config.LeaveOnTerm { graceful = true } // Bail fast if not doing a graceful leave if !graceful { return 1 } // Attempt a graceful leave gracefulCh := make(chan struct{}) c.Ui.Output("Gracefully shutting down agent...") go func() { if err := agent.Leave(); err != nil { c.Ui.Error(fmt.Sprintf("Error: %s", err)) return } close(gracefulCh) }() // Wait for leave or another signal select { case <-signalCh: return 1 case <-time.After(gracefulTimeout): return 1 case <-gracefulCh: return 0 } } // handleReload is invoked when we should reload our configs, e.g. SIGHUP func (c *Command) handleReload(config *Config, agent *Agent) *Config { c.Ui.Output("Reloading configuration...") newConf := c.readConfig() if newConf == nil { c.Ui.Error(fmt.Sprintf("Failed to reload configs")) return config } // Change the log level minLevel := logutils.LogLevel(strings.ToUpper(newConf.LogLevel)) if ValidateLevelFilter(minLevel, c.logFilter) { c.logFilter.SetMinLevel(minLevel) } else { c.Ui.Error(fmt.Sprintf( "Invalid log level: %s. Valid log levels are: %v", minLevel, c.logFilter.Levels)) // Keep the current log level newConf.LogLevel = config.LogLevel } // Change the event handlers c.scriptHandler.UpdateScripts(newConf.EventScripts()) // Update the tags in serf if err := agent.SetTags(newConf.Tags); err != nil { c.Ui.Error(fmt.Sprintf("Failed to update tags: %v", err)) return newConf } return newConf } func (c *Command) Synopsis() string { return "Runs a Serf agent" } func (c *Command) Help() string { helpText := ` Usage: serf agent [options] Starts the Serf agent and runs until an interrupt is received. The agent represents a single node in a cluster. 
Options:

  -bind=0.0.0.0            Address to bind network listeners to
  -iface                   Network interface to bind to. Can be used instead of
                           -bind if the interface is known but not the address.
                           If both are provided, then Serf verifies that the
                           interface has the bind address that is provided. This
                           flag also sets the multicast device used for -discover.
  -advertise=0.0.0.0       Address to advertise to the other cluster members
  -config-file=foo         Path to a JSON file to read configuration from. This
                           can be specified multiple times.
  -config-dir=foo          Path to a directory to read configuration files from.
                           This will read every file ending in ".json" as
                           configuration in this directory in alphabetical order.
  -discover=cluster        Discover is set to enable mDNS discovery of peers. On
                           networks that support multicast, this can be used to
                           have peers join each other without an explicit join.
  -encrypt=foo             Key for encrypting network traffic within Serf. Must
                           be a base64-encoded 16-byte key.
  -keyring-file            The keyring file is used to store encryption keys
                           used by Serf. As encryption keys are changed, the
                           content of this file is updated so that the same keys
                           may be used during later agent starts.
  -event-handler=foo       Script to execute when events occur. This can be
                           specified multiple times. See the event scripts
                           section below for more info.
  -join=addr               An initial agent to join with. This flag can be
                           specified multiple times.
  -log-level=info          Log level of the agent.
  -node=hostname           Name of this node. Must be unique in the cluster.
  -profile=[lan|wan|local] Profile is used to control the timing profiles used
                           in Serf. The default if not provided is lan.
  -protocol=n              Serf protocol version to use. This defaults to the
                           latest version, but can be set back for upgrades.
  -rejoin                  Ignores a previous leave and attempts to rejoin the
                           cluster. Only works if provided along with a
                           snapshot file.
  -retry-join=addr         An agent to join with. This flag can be specified
                           multiple times. Does not exit on failure like -join;
                           used to retry until success.
  -retry-interval=30s      Sets the interval on which a node will attempt to
                           retry joining nodes provided by -retry-join.
                           Defaults to 30s.
  -retry-max=0             Limits the number of retry events. Defaults to 0 for
                           unlimited.
  -role=foo                The role of this node, if any. This can be used by
                           event scripts to differentiate different types of
                           nodes that may be part of the same cluster. '-role'
                           is deprecated in favor of '-tag role=foo'.
  -rpc-addr=127.0.0.1:7373 Address to bind the RPC listener.
  -snapshot=path/to/file   The snapshot file is used to store alive nodes and
                           event information so that Serf can rejoin a cluster
                           and avoid event replay on restart.
  -tag key=value           Tag can be specified multiple times to attach
                           multiple key/value tag pairs to the given node.
  -tags-file=/path/to/file The tags file is used to persist tag data. As an
                           agent's tags are changed, the tags file will be
                           updated. Tags can be reloaded during later agent
                           starts. This option is incompatible with the '-tag'
                           option and requires there be no tags in the agent
                           configuration file, if given.
  -syslog                  When provided, logs will also be sent to syslog.

Event handlers:

  For more information on what event handlers are, please read the
  Serf documentation. This section will document how to configure them
  on the command-line. There are three methods of specifying an event
  handler:

  - The value can be a plain script, such as "event.sh". In this case,
    Serf will send all events to this script, and you'll be responsible
    for differentiating between them based on the SERF_EVENT.

  - The value can be in the format of "TYPE=SCRIPT", such as
    "member-join=join.sh". With this format, Serf will only send events
    of that type to that script.

  - The value can be in the format of "user:EVENT=SCRIPT", such as
    "user:deploy=deploy.sh". This means that Serf will only invoke this
    script in the case of user events named "deploy".
` return strings.TrimSpace(helpText) } serf-0.6.4/command/agent/command_test.go000066400000000000000000000142041246721563000202070ustar00rootroot00000000000000package agent import ( "bytes" "github.com/hashicorp/serf/client" "github.com/hashicorp/serf/testutil" "github.com/mitchellh/cli" "log" "os" "testing" "time" ) func TestCommand_implements(t *testing.T) { var _ cli.Command = new(Command) } func TestCommandRun(t *testing.T) { shutdownCh := make(chan struct{}) defer close(shutdownCh) ui := new(cli.MockUi) c := &Command{ ShutdownCh: shutdownCh, Ui: ui, } args := []string{ "-bind", testutil.GetBindAddr().String(), "-rpc-addr", getRPCAddr(), } resultCh := make(chan int) go func() { resultCh <- c.Run(args) }() testutil.Yield() // Verify it runs "forever" select { case <-resultCh: t.Fatalf("ended too soon, err: %s", ui.ErrorWriter.String()) case <-time.After(50 * time.Millisecond): } // Send a shutdown request shutdownCh <- struct{}{} select { case code := <-resultCh: if code != 0 { t.Fatalf("bad code: %d", code) } case <-time.After(50 * time.Millisecond): t.Fatalf("timeout") } } func TestCommandRun_rpc(t *testing.T) { doneCh := make(chan struct{}) shutdownCh := make(chan struct{}) defer func() { close(shutdownCh) <-doneCh }() c := &Command{ ShutdownCh: shutdownCh, Ui: new(cli.MockUi), } rpcAddr := getRPCAddr() args := []string{ "-bind", testutil.GetBindAddr().String(), "-rpc-addr", rpcAddr, } go func() { code := c.Run(args) if code != 0 { log.Printf("bad: %d", code) } close(doneCh) }() testutil.Yield() client, err := client.NewRPCClient(rpcAddr) if err != nil { t.Fatalf("err: %s", err) } defer client.Close() members, err := client.Members() if err != nil { t.Fatalf("err: %s", err) } if len(members) != 1 { t.Fatalf("bad: %#v", members) } } func TestCommandRun_join(t *testing.T) { a1 := testAgent(nil) if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } defer a1.Shutdown() doneCh := make(chan struct{}) shutdownCh := make(chan struct{}) defer func() { 
close(shutdownCh) <-doneCh }() c := &Command{ ShutdownCh: shutdownCh, Ui: new(cli.MockUi), } args := []string{ "-bind", testutil.GetBindAddr().String(), "-join", a1.conf.MemberlistConfig.BindAddr, "-replay", } go func() { code := c.Run(args) if code != 0 { log.Printf("bad: %d", code) } close(doneCh) }() testutil.Yield() if len(a1.Serf().Members()) != 2 { t.Fatalf("bad: %#v", a1.Serf().Members()) } } func TestCommandRun_joinFail(t *testing.T) { shutdownCh := make(chan struct{}) defer close(shutdownCh) c := &Command{ ShutdownCh: shutdownCh, Ui: new(cli.MockUi), } args := []string{ "-bind", testutil.GetBindAddr().String(), "-join", testutil.GetBindAddr().String(), } code := c.Run(args) if code == 0 { t.Fatal("should fail") } } func TestCommandRun_advertiseAddr(t *testing.T) { doneCh := make(chan struct{}) shutdownCh := make(chan struct{}) defer func() { close(shutdownCh) <-doneCh }() c := &Command{ ShutdownCh: shutdownCh, Ui: new(cli.MockUi), } rpcAddr := getRPCAddr() args := []string{ "-bind", testutil.GetBindAddr().String(), "-rpc-addr", rpcAddr, "-advertise", "127.0.0.10:12345", } go func() { code := c.Run(args) if code != 0 { log.Printf("bad: %d", code) } close(doneCh) }() testutil.Yield() client, err := client.NewRPCClient(rpcAddr) if err != nil { t.Fatalf("err: %s", err) } defer client.Close() members, err := client.Members() if err != nil { t.Fatalf("err: %s", err) } if len(members) != 1 { t.Fatalf("bad: %#v", members) } // Check the addr and port is as advertised! 
m := members[0] if bytes.Compare(m.Addr, []byte{127, 0, 0, 10}) != 0 { t.Fatalf("bad: %#v", m) } if m.Port != 12345 { t.Fatalf("bad: %#v", m) } } func TestCommandRun_mDNS(t *testing.T) { // mDNS does not work in travis if os.Getenv("TRAVIS") != "" { t.SkipNow() } // Start an agent doneCh := make(chan struct{}) shutdownCh := make(chan struct{}) defer func() { close(shutdownCh) <-doneCh }() c := &Command{ ShutdownCh: shutdownCh, Ui: new(cli.MockUi), } args := []string{ "-node", "foo", "-bind", testutil.GetBindAddr().String(), "-discover", "test", "-rpc-addr", getRPCAddr(), } go func() { code := c.Run(args) if code != 0 { log.Printf("bad: %d", code) } close(doneCh) }() // Start a second agent doneCh2 := make(chan struct{}) shutdownCh2 := make(chan struct{}) defer func() { close(shutdownCh2) <-doneCh2 }() c2 := &Command{ ShutdownCh: shutdownCh2, Ui: new(cli.MockUi), } addr2 := getRPCAddr() args2 := []string{ "-node", "bar", "-bind", testutil.GetBindAddr().String(), "-discover", "test", "-rpc-addr", addr2, } go func() { code := c2.Run(args2) if code != 0 { log.Printf("bad: %d", code) } close(doneCh2) }() time.Sleep(150 * time.Millisecond) client, err := client.NewRPCClient(addr2) if err != nil { t.Fatalf("err: %s", err) } defer client.Close() members, err := client.Members() if err != nil { t.Fatalf("err: %s", err) } if len(members) != 2 { t.Fatalf("bad: %#v", members) } } func TestCommandRun_retry_join(t *testing.T) { a1 := testAgent(nil) if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } defer a1.Shutdown() doneCh := make(chan struct{}) shutdownCh := make(chan struct{}) defer func() { close(shutdownCh) <-doneCh }() c := &Command{ ShutdownCh: shutdownCh, Ui: new(cli.MockUi), } args := []string{ "-bind", testutil.GetBindAddr().String(), "-retry-join", a1.conf.MemberlistConfig.BindAddr, "-replay", } go func() { code := c.Run(args) if code != 0 { log.Printf("bad: %d", code) } close(doneCh) }() testutil.Yield() if len(a1.Serf().Members()) != 2 { t.Fatalf("bad: 
%#v", a1.Serf().Members()) } } func TestCommandRun_retry_joinFail(t *testing.T) { shutdownCh := make(chan struct{}) defer close(shutdownCh) c := &Command{ ShutdownCh: shutdownCh, Ui: new(cli.MockUi), } args := []string{ "-bind", testutil.GetBindAddr().String(), "-retry-join", testutil.GetBindAddr().String(), "-retry-interval", "1s", "-retry-max", "1", } code := c.Run(args) if code == 0 { t.Fatal("should fail") } } serf-0.6.4/command/agent/config.go000066400000000000000000000371421246721563000170050ustar00rootroot00000000000000package agent import ( "encoding/base64" "encoding/json" "fmt" "github.com/hashicorp/serf/serf" "github.com/mitchellh/mapstructure" "io" "net" "os" "path/filepath" "sort" "strings" "time" ) // This is the default port that we use for Serf communication const DefaultBindPort int = 7946 // DefaultConfig contains the defaults for configurations. func DefaultConfig() *Config { return &Config{ Tags: make(map[string]string), BindAddr: "0.0.0.0", AdvertiseAddr: "", LogLevel: "INFO", RPCAddr: "127.0.0.1:7373", Protocol: serf.ProtocolVersionMax, ReplayOnJoin: false, Profile: "lan", RetryInterval: 30 * time.Second, SyslogFacility: "LOCAL0", } } type dirEnts []os.FileInfo // Config is the configuration that can be set for an Agent. Some of these // configurations are exposed as command-line flags to `serf agent`, whereas // many of the more advanced configurations can only be set by creating // a configuration file. type Config struct { // All the configurations in this section are identical to their // Serf counterparts. See the documentation for Serf.Config for // more info. NodeName string `mapstructure:"node_name"` Role string `mapstructure:"role"` // Tags are used to attach key/value metadata to a node. They have // replaced 'Role' as a more flexible meta data mechanism. For compatibility, // the 'role' key is special, and is used for backwards compatibility. 
	Tags map[string]string `mapstructure:"tags"`

	// TagsFile is the path to a file where Serf can store its tags. Tag
	// persistence is desirable since tags may be set or deleted while the
	// agent is running. Tags can be reloaded from this file on later starts.
	TagsFile string `mapstructure:"tags_file"`

	// BindAddr is the address that the Serf agent's communication ports
	// will bind to. Serf will use this address to bind to for both TCP
	// and UDP connections. If no port is present in the address, the default
	// port will be used.
	BindAddr string `mapstructure:"bind"`

	// AdvertiseAddr is the address that the Serf agent will advertise to
	// other members of the cluster. Can be used for basic NAT traversal
	// where both the internal ip:port and external ip:port are known.
	AdvertiseAddr string `mapstructure:"advertise"`

	// EncryptKey is the secret key to use for encrypting communication
	// traffic for Serf. The secret key must be exactly 16 bytes, base64
	// encoded. The easiest way to do this on Unix machines is this command:
	// "head -c16 /dev/urandom | base64". If this is not specified, the
	// traffic will not be encrypted.
	EncryptKey string `mapstructure:"encrypt_key"`

	// KeyringFile is the path to a file containing a serialized keyring.
	// The keyring is used to facilitate encryption.
	KeyringFile string `mapstructure:"keyring_file"`

	// LogLevel is the level of the logs to output.
	// This can be updated during a reload.
	LogLevel string `mapstructure:"log_level"`

	// RPCAddr is the address and port to listen on for the agent's RPC
	// interface.
	RPCAddr string `mapstructure:"rpc_addr"`

	// RPCAuthKey is a key that can be set to optionally require that
	// RPCs provide an authentication key. This is meant to be a very
	// simple authentication control.
	RPCAuthKey string `mapstructure:"rpc_auth"`

	// Protocol is the Serf protocol version to use.
	Protocol int `mapstructure:"protocol"`

	// ReplayOnJoin tells Serf to replay past user events
	// when joining based on a `StartJoin`.
	ReplayOnJoin bool `mapstructure:"replay_on_join"`

	// StartJoin is a list of addresses to attempt to join when the
	// agent starts. If Serf is unable to communicate with any of these
	// addresses, then the agent will error and exit.
	StartJoin []string `mapstructure:"start_join"`

	// EventHandlers is a list of event handlers that will be invoked.
	// These can be updated during a reload.
	EventHandlers []string `mapstructure:"event_handlers"`

	// Profile is used to select a timing profile for Serf. The supported choices
	// are "wan", "lan", and "local". The default is "lan".
	Profile string `mapstructure:"profile"`

	// SnapshotPath is used to allow Serf to snapshot important transactional
	// state to make a more graceful recovery possible. This enables auto
	// re-joining a cluster on failure and avoids old message replay.
	SnapshotPath string `mapstructure:"snapshot_path"`

	// LeaveOnTerm controls if Serf does a graceful leave when receiving
	// the TERM signal. Defaults to false. This can be changed on reload.
	LeaveOnTerm bool `mapstructure:"leave_on_terminate"`

	// SkipLeaveOnInt controls if Serf skips a graceful leave when receiving
	// the INT signal. Defaults to false. This can be changed on reload.
	SkipLeaveOnInt bool `mapstructure:"skip_leave_on_interrupt"`

	// Discover is used to set up an mDNS Discovery name. When this is set, the
	// agent will set up an mDNS responder and periodically run an mDNS query
	// to look for peers. For peers on a network that supports multicast, this
	// allows Serf agents to join each other with zero configuration.
	Discover string `mapstructure:"discover"`

	// Interface is used to provide a binding interface to use. It can be
	// used instead of providing a bind address, as Serf will discover the
	// address of the provided interface. It is also used to set the multicast
	// device used with `-discover`.
	Interface string `mapstructure:"interface"`

	// ReconnectIntervalRaw is the string reconnect interval time. This interval
	// controls how often we attempt to connect to a failed node.
	ReconnectIntervalRaw string        `mapstructure:"reconnect_interval"`
	ReconnectInterval    time.Duration `mapstructure:"-"`

	// ReconnectTimeoutRaw is the string reconnect timeout. This timeout controls
	// for how long we attempt to connect to a failed node before removing
	// it from the cluster.
	ReconnectTimeoutRaw string        `mapstructure:"reconnect_timeout"`
	ReconnectTimeout    time.Duration `mapstructure:"-"`

	// TombstoneTimeoutRaw is the string tombstone timeout. This timeout controls
	// for how long we remember a left node before removing it from the cluster.
	TombstoneTimeoutRaw string        `mapstructure:"tombstone_timeout"`
	TombstoneTimeout    time.Duration `mapstructure:"-"`

	// By default Serf will attempt to resolve name conflicts. This is done by
	// determining which node the majority believe to be the proper node, and
	// by having the minority node shut down. If you want to disable this
	// behavior, then this flag can be set to true.
	DisableNameResolution bool `mapstructure:"disable_name_resolution"`

	// EnableSyslog is used to also tee all the logs over to syslog. Only
	// supported on Linux and OS X. Other platforms will generate an error.
	EnableSyslog bool `mapstructure:"enable_syslog"`

	// SyslogFacility is used to control which syslog facility messages are
	// sent to. Defaults to LOCAL0.
	SyslogFacility string `mapstructure:"syslog_facility"`

	// RetryJoin is a list of addresses to attempt to join when the
	// agent starts. Serf will continue to retry the join until it
	// succeeds or RetryMaxAttempts is reached.
	RetryJoin []string `mapstructure:"retry_join"`

	// RetryMaxAttempts is used to limit the maximum attempts made
	// by RetryJoin to reach other nodes. If this is 0, then no limit
	// is imposed, and Serf will continue to try forever. Defaults to 0.
	RetryMaxAttempts int `mapstructure:"retry_max_attempts"`

	// RetryIntervalRaw is the string retry interval. This interval
	// controls how often we retry the join for RetryJoin. This defaults
	// to 30 seconds.
	RetryIntervalRaw string        `mapstructure:"retry_interval"`
	RetryInterval    time.Duration `mapstructure:"-"`

	// RejoinAfterLeave controls our interaction with the snapshot file.
	// When set to false (default), a leave causes Serf to not rejoin
	// the cluster until an explicit join is received. If this is set to
	// true, we ignore the leave, and rejoin the cluster on start. This
	// only has an effect if the snapshot file is enabled.
	RejoinAfterLeave bool `mapstructure:"rejoin_after_leave"`

	// StatsiteAddr is the address of a statsite instance. If provided,
	// metrics will be streamed to that instance.
	StatsiteAddr string `mapstructure:"statsite_addr"`
}

// AddrParts returns the parts of the given address that should be
// used to configure Serf.
func (c *Config) AddrParts(address string) (string, int, error) {
	checkAddr := address

START:
	_, _, err := net.SplitHostPort(checkAddr)
	if ae, ok := err.(*net.AddrError); ok && ae.Err == "missing port in address" {
		checkAddr = fmt.Sprintf("%s:%d", checkAddr, DefaultBindPort)
		goto START
	}
	if err != nil {
		return "", 0, err
	}

	// Get the address
	addr, err := net.ResolveTCPAddr("tcp", checkAddr)
	if err != nil {
		return "", 0, err
	}

	return addr.IP.String(), addr.Port, nil
}

// EncryptBytes returns the encryption key configured.
func (c *Config) EncryptBytes() ([]byte, error) {
	return base64.StdEncoding.DecodeString(c.EncryptKey)
}

// EventScripts returns the list of EventScripts associated with this
// configuration and specified by the "event_handlers" configuration.
func (c *Config) EventScripts() []EventScript {
	result := make([]EventScript, 0, len(c.EventHandlers))
	for _, v := range c.EventHandlers {
		part := ParseEventScript(v)
		result = append(result, part...)
	}
	return result
}

// NetworkInterface is used to get the associated network
// interface from the configured value.
func (c *Config) NetworkInterface() (*net.Interface, error) {
	if c.Interface == "" {
		return nil, nil
	}
	return net.InterfaceByName(c.Interface)
}

// DecodeConfig reads the configuration from the given reader in JSON
// format and decodes it into a proper Config structure.
func DecodeConfig(r io.Reader) (*Config, error) {
	var raw interface{}
	dec := json.NewDecoder(r)
	if err := dec.Decode(&raw); err != nil {
		return nil, err
	}

	// Decode
	var md mapstructure.Metadata
	var result Config
	msdec, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
		Metadata:    &md,
		Result:      &result,
		ErrorUnused: true,
	})
	if err != nil {
		return nil, err
	}

	if err := msdec.Decode(raw); err != nil {
		return nil, err
	}

	// Decode the time values
	if result.ReconnectIntervalRaw != "" {
		dur, err := time.ParseDuration(result.ReconnectIntervalRaw)
		if err != nil {
			return nil, err
		}
		result.ReconnectInterval = dur
	}

	if result.ReconnectTimeoutRaw != "" {
		dur, err := time.ParseDuration(result.ReconnectTimeoutRaw)
		if err != nil {
			return nil, err
		}
		result.ReconnectTimeout = dur
	}

	if result.TombstoneTimeoutRaw != "" {
		dur, err := time.ParseDuration(result.TombstoneTimeoutRaw)
		if err != nil {
			return nil, err
		}
		result.TombstoneTimeout = dur
	}

	if result.RetryIntervalRaw != "" {
		dur, err := time.ParseDuration(result.RetryIntervalRaw)
		if err != nil {
			return nil, err
		}
		result.RetryInterval = dur
	}

	return &result, nil
}

// containsKey is used to check if a slice of string keys contains
// another key
func containsKey(keys []string, key string) bool {
	for _, k := range keys {
		if k == key {
			return true
		}
	}
	return false
}

// MergeConfig merges two configurations together to make a single new
// configuration.
func MergeConfig(a, b *Config) *Config {
	var result Config = *a

	// Copy the strings if they're set
	if b.NodeName != "" {
		result.NodeName = b.NodeName
	}
	if b.Role != "" {
		result.Role = b.Role
	}
	if b.Tags != nil {
		if result.Tags == nil {
			result.Tags = make(map[string]string)
		}
		for name, value := range b.Tags {
			result.Tags[name] = value
		}
	}
	if b.BindAddr != "" {
		result.BindAddr = b.BindAddr
	}
	if b.AdvertiseAddr != "" {
		result.AdvertiseAddr = b.AdvertiseAddr
	}
	if b.EncryptKey != "" {
		result.EncryptKey = b.EncryptKey
	}
	if b.LogLevel != "" {
		result.LogLevel = b.LogLevel
	}
	if b.Protocol > 0 {
		result.Protocol = b.Protocol
	}
	if b.RPCAddr != "" {
		result.RPCAddr = b.RPCAddr
	}
	if b.RPCAuthKey != "" {
		result.RPCAuthKey = b.RPCAuthKey
	}
	if b.ReplayOnJoin != false {
		result.ReplayOnJoin = b.ReplayOnJoin
	}
	if b.Profile != "" {
		result.Profile = b.Profile
	}
	if b.SnapshotPath != "" {
		result.SnapshotPath = b.SnapshotPath
	}
	if b.LeaveOnTerm == true {
		result.LeaveOnTerm = true
	}
	if b.SkipLeaveOnInt == true {
		result.SkipLeaveOnInt = true
	}
	if b.Discover != "" {
		result.Discover = b.Discover
	}
	if b.Interface != "" {
		result.Interface = b.Interface
	}
	if b.ReconnectInterval != 0 {
		result.ReconnectInterval = b.ReconnectInterval
	}
	if b.ReconnectTimeout != 0 {
		result.ReconnectTimeout = b.ReconnectTimeout
	}
	if b.TombstoneTimeout != 0 {
		result.TombstoneTimeout = b.TombstoneTimeout
	}
	if b.DisableNameResolution {
		result.DisableNameResolution = true
	}
	if b.TagsFile != "" {
		result.TagsFile = b.TagsFile
	}
	if b.KeyringFile != "" {
		result.KeyringFile = b.KeyringFile
	}
	if b.EnableSyslog {
		result.EnableSyslog = true
	}
	if b.RetryMaxAttempts != 0 {
		result.RetryMaxAttempts = b.RetryMaxAttempts
	}
	if b.RetryInterval != 0 {
		result.RetryInterval = b.RetryInterval
	}
	if b.RejoinAfterLeave {
		result.RejoinAfterLeave = true
	}
	if b.SyslogFacility != "" {
		result.SyslogFacility = b.SyslogFacility
	}
	if b.StatsiteAddr != "" {
		result.StatsiteAddr = b.StatsiteAddr
	}

	// Copy the event handlers
	result.EventHandlers = make([]string, 0, len(a.EventHandlers)+len(b.EventHandlers))
	result.EventHandlers = append(result.EventHandlers, a.EventHandlers...)
	result.EventHandlers = append(result.EventHandlers, b.EventHandlers...)

	// Copy the start join addresses
	result.StartJoin = make([]string, 0, len(a.StartJoin)+len(b.StartJoin))
	result.StartJoin = append(result.StartJoin, a.StartJoin...)
	result.StartJoin = append(result.StartJoin, b.StartJoin...)

	// Copy the retry join addresses
	result.RetryJoin = make([]string, 0, len(a.RetryJoin)+len(b.RetryJoin))
	result.RetryJoin = append(result.RetryJoin, a.RetryJoin...)
	result.RetryJoin = append(result.RetryJoin, b.RetryJoin...)

	return &result
}

// ReadConfigPaths reads the paths in the given order to load configurations.
// The paths can be to files or directories. If the path is a directory,
// we read one directory deep and read any files ending in ".json" as
// configuration files.
func ReadConfigPaths(paths []string) (*Config, error) {
	result := new(Config)

	for _, path := range paths {
		f, err := os.Open(path)
		if err != nil {
			return nil, fmt.Errorf("Error reading '%s': %s", path, err)
		}

		fi, err := f.Stat()
		if err != nil {
			f.Close()
			return nil, fmt.Errorf("Error reading '%s': %s", path, err)
		}

		if !fi.IsDir() {
			config, err := DecodeConfig(f)
			f.Close()
			if err != nil {
				return nil, fmt.Errorf("Error decoding '%s': %s", path, err)
			}
			result = MergeConfig(result, config)
			continue
		}

		contents, err := f.Readdir(-1)
		f.Close()
		if err != nil {
			return nil, fmt.Errorf("Error reading '%s': %s", path, err)
		}

		// Sort the contents, ensures lexical order
		sort.Sort(dirEnts(contents))

		for _, fi := range contents {
			// Don't recursively read contents
			if fi.IsDir() {
				continue
			}

			// If it isn't a JSON file, ignore it
			if !strings.HasSuffix(fi.Name(), ".json") {
				continue
			}

			subpath := filepath.Join(path, fi.Name())
			f, err := os.Open(subpath)
			if err != nil {
				return nil, fmt.Errorf("Error reading '%s': %s", subpath, err)
			}

			config, err := DecodeConfig(f)
			f.Close()
			if err != nil {
				return nil, fmt.Errorf("Error decoding '%s': %s", subpath, err)
			}
			result = MergeConfig(result, config)
		}
	}

	return result, nil
}

// Implement the sort interface for dirEnts
func (d dirEnts) Len() int {
	return len(d)
}

func (d dirEnts) Less(i, j int) bool {
	return d[i].Name() < d[j].Name()
}

func (d dirEnts) Swap(i, j int) {
	d[i], d[j] = d[j], d[i]
}

serf-0.6.4/command/agent/config_test.go

package agent

import (
	"bytes"
	"encoding/base64"
	"io/ioutil"
	"os"
	"path/filepath"
	"reflect"
	"testing"
	"time"
)

func TestConfigBindAddrParts(t *testing.T) {
	testCases := []struct {
		Value string
		IP    string
		Port  int
		Error bool
	}{
		{"0.0.0.0", "0.0.0.0", DefaultBindPort, false},
		{"0.0.0.0:1234", "0.0.0.0", 1234, false},
	}

	for _, tc := range testCases {
		c := &Config{BindAddr: tc.Value}
		ip, port, err := c.AddrParts(c.BindAddr)
		if tc.Error != (err != nil) {
			t.Errorf("Bad error: %s", err)
			continue
		}
		if tc.IP != ip {
			t.Errorf("%s: Got IP %#v", tc.Value, ip)
			continue
		}
		if tc.Port != port {
			t.Errorf("%s: Got port %d", tc.Value, port)
			continue
		}
	}
}

func TestConfigEncryptBytes(t *testing.T) {
	// Test with some input
	src := []byte("abc")
	c := &Config{
		EncryptKey: base64.StdEncoding.EncodeToString(src),
	}

	result, err := c.EncryptBytes()
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if !bytes.Equal(src, result) {
		t.Fatalf("bad: %#v", result)
	}

	// Test with no input
	c = &Config{}
	result, err = c.EncryptBytes()
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if len(result) > 0 {
		t.Fatalf("bad: %#v", result)
	}
}

func TestConfigEventScripts(t *testing.T) {
	c := &Config{
		EventHandlers: []string{
			"foo.sh",
			"bar=blah.sh",
		},
	}

	result := c.EventScripts()
	if len(result) != 2 {
		t.Fatalf("bad: %#v", result)
	}

	expected := []EventScript{
		{EventFilter{"*", ""}, "foo.sh"},
		{EventFilter{"bar", ""}, "blah.sh"},
	}
	if !reflect.DeepEqual(result, expected) {
		t.Fatalf("bad: %#v", result)
	}
}

func TestDecodeConfig(t *testing.T) {
	// Without a protocol
	input := `{"node_name": "foo"}`
	config, err := DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	if config.NodeName != "foo" {
		t.Fatalf("bad: %#v", config)
	}
	if config.Protocol != 0 {
		t.Fatalf("bad: %#v", config)
	}
	if config.SkipLeaveOnInt != DefaultConfig().SkipLeaveOnInt {
		t.Fatalf("bad: %#v", config)
	}
	if config.LeaveOnTerm != DefaultConfig().LeaveOnTerm {
		t.Fatalf("bad: %#v", config)
	}

	// With a protocol
	input = `{"node_name": "foo", "protocol": 7}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.NodeName != "foo" {
		t.Fatalf("bad: %#v", config)
	}
	if config.Protocol != 7 {
		t.Fatalf("bad: %#v", config)
	}

	// A bind addr
	input = `{"bind": "127.0.0.2"}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.BindAddr != "127.0.0.2" {
		t.Fatalf("bad: %#v", config)
	}

	// replayOnJoin
	input = `{"replay_on_join": true}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.ReplayOnJoin != true {
		t.Fatalf("bad: %#v", config)
	}

	// leave_on_terminate
	input = `{"leave_on_terminate": true}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.LeaveOnTerm != true {
		t.Fatalf("bad: %#v", config)
	}

	// skip_leave_on_interrupt
	input = `{"skip_leave_on_interrupt": true}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.SkipLeaveOnInt != true {
		t.Fatalf("bad: %#v", config)
	}

	// tags
	input = `{"tags": {"foo": "bar", "role": "test"}}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.Tags["foo"] != "bar" {
		t.Fatalf("bad: %#v", config)
	}
	if config.Tags["role"] != "test" {
		t.Fatalf("bad: %#v", config)
	}

	// tags file
	input = `{"tags_file": "/some/path"}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.TagsFile != "/some/path" {
		t.Fatalf("bad: %#v", config)
	}

	// Discover
	input = `{"discover": "foobar"}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.Discover != "foobar" {
		t.Fatalf("bad: %#v", config)
	}

	// Interface
	input = `{"interface": "eth0"}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.Interface != "eth0" {
		t.Fatalf("bad: %#v", config)
	}

	// Reconnect intervals
	input = `{"reconnect_interval": "15s", "reconnect_timeout": "48h"}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.ReconnectInterval != 15*time.Second {
		t.Fatalf("bad: %#v", config)
	}
	if config.ReconnectTimeout != 48*time.Hour {
		t.Fatalf("bad: %#v", config)
	}

	// RPC Auth
	input = `{"rpc_auth": "foobar"}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.RPCAuthKey != "foobar" {
		t.Fatalf("bad: %#v", config)
	}

	// DisableNameResolution
	input = `{"disable_name_resolution": true}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if !config.DisableNameResolution {
		t.Fatalf("bad: %#v", config)
	}

	// Tombstone intervals
	input = `{"tombstone_timeout": "48h"}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.TombstoneTimeout != 48*time.Hour {
		t.Fatalf("bad: %#v", config)
	}

	// Syslog
	input = `{"enable_syslog": true, "syslog_facility": "LOCAL4"}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if !config.EnableSyslog {
		t.Fatalf("bad: %#v", config)
	}
	if config.SyslogFacility != "LOCAL4" {
		t.Fatalf("bad: %#v", config)
	}

	// Retry configs
	input = `{"retry_max_attempts": 5, "retry_interval": "60s"}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.RetryMaxAttempts != 5 {
		t.Fatalf("bad: %#v", config)
	}
	if config.RetryInterval != 60*time.Second {
		t.Fatalf("bad: %#v", config)
	}

	// Retry configs
	input = `{"retry_join": ["127.0.0.1", "127.0.0.2"]}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if len(config.RetryJoin) != 2 {
		t.Fatalf("bad: %#v", config)
	}
	if config.RetryJoin[0] != "127.0.0.1" {
		t.Fatalf("bad: %#v", config)
	}
	if config.RetryJoin[1] != "127.0.0.2" {
		t.Fatalf("bad: %#v", config)
	}

	// Rejoin configs
	input = `{"rejoin_after_leave": true}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if !config.RejoinAfterLeave {
		t.Fatalf("bad: %#v", config)
	}

	// Statsite configs
	input = `{"statsite_addr": "127.0.0.1:8123"}`
	config, err = DecodeConfig(bytes.NewReader([]byte(input)))
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.StatsiteAddr != "127.0.0.1:8123" {
		t.Fatalf("bad: %#v", config)
	}
}

func TestDecodeConfig_unknownDirective(t *testing.T) {
	input := `{"unknown_directive": "titi"}`
	_, err := DecodeConfig(bytes.NewReader([]byte(input)))
	if err == nil {
		t.Fatal("should have err")
	}
}

func TestMergeConfig(t *testing.T) {
	a := &Config{
		NodeName:      "foo",
		Role:          "bar",
		Protocol:      7,
		EventHandlers: []string{"foo"},
		StartJoin:     []string{"foo"},
		ReplayOnJoin:  true,
		RetryJoin:     []string{"zab"},
	}

	b := &Config{
		NodeName:              "bname",
		Protocol:              -1,
		EncryptKey:            "foo",
		EventHandlers:         []string{"bar"},
		StartJoin:             []string{"bar"},
		LeaveOnTerm:           true,
		SkipLeaveOnInt:        true,
		Discover:              "tubez",
		Interface:             "eth0",
		ReconnectInterval:     15 * time.Second,
		ReconnectTimeout:      48 * time.Hour,
		RPCAuthKey:            "foobar",
		DisableNameResolution: true,
		TombstoneTimeout:      36 * time.Hour,
		EnableSyslog:          true,
		RetryJoin:             []string{"zip"},
		RetryMaxAttempts:      10,
		RetryInterval:         120 * time.Second,
		RejoinAfterLeave:      true,
		StatsiteAddr:          "127.0.0.1:8125",
	}

	c := MergeConfig(a, b)

	if c.NodeName != "bname" {
		t.Fatalf("bad: %#v", c)
	}
	if c.Role != "bar" {
		t.Fatalf("bad: %#v", c)
	}
	if c.Protocol != 7 {
		t.Fatalf("bad: %#v", c)
	}
	if c.EncryptKey != "foo" {
		t.Fatalf("bad: %#v", c.EncryptKey)
	}
	if c.ReplayOnJoin != true {
		t.Fatalf("bad: %#v", c.ReplayOnJoin)
	}
	if !c.LeaveOnTerm {
		t.Fatalf("bad: %#v", c.LeaveOnTerm)
	}
	if !c.SkipLeaveOnInt {
		t.Fatalf("bad: %#v", c.SkipLeaveOnInt)
	}
	if c.Discover != "tubez" {
		t.Fatalf("Bad: %v", c.Discover)
	}
	if c.Interface != "eth0" {
		t.Fatalf("Bad: %v", c.Interface)
	}
	if c.ReconnectInterval != 15*time.Second {
		t.Fatalf("bad: %#v", c)
	}
	if c.ReconnectTimeout != 48*time.Hour {
		t.Fatalf("bad: %#v", c)
	}
	if c.TombstoneTimeout != 36*time.Hour {
		t.Fatalf("bad: %#v", c)
	}
	if c.RPCAuthKey != "foobar" {
		t.Fatalf("bad: %#v", c)
	}
	if !c.DisableNameResolution {
		t.Fatalf("bad: %#v", c)
	}
	if !c.EnableSyslog {
		t.Fatalf("bad: %#v", c)
	}
	if c.RetryMaxAttempts != 10 {
		t.Fatalf("bad: %#v", c)
	}
	if c.RetryInterval != 120*time.Second {
		t.Fatalf("bad: %#v", c)
	}
	if !c.RejoinAfterLeave {
		t.Fatalf("bad: %#v", c)
	}
	if c.StatsiteAddr != "127.0.0.1:8125" {
		t.Fatalf("bad: %#v", c)
	}

	expected := []string{"foo", "bar"}
	if !reflect.DeepEqual(c.EventHandlers, expected) {
		t.Fatalf("bad: %#v", c)
	}
	if !reflect.DeepEqual(c.StartJoin, expected) {
		t.Fatalf("bad: %#v", c)
	}

	expected = []string{"zab", "zip"}
	if !reflect.DeepEqual(c.RetryJoin, expected) {
		t.Fatalf("bad: %#v", c)
	}
}

func TestReadConfigPaths_badPath(t *testing.T) {
	_, err := ReadConfigPaths([]string{"/i/shouldnt/exist/ever/rainbows"})
	if err == nil {
		t.Fatal("should have err")
	}
}

func TestReadConfigPaths_file(t *testing.T) {
	tf, err := ioutil.TempFile("", "serf")
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	tf.Write([]byte(`{"node_name":"bar"}`))
	tf.Close()
	defer os.Remove(tf.Name())

	config, err := ReadConfigPaths([]string{tf.Name()})
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.NodeName != "bar" {
		t.Fatalf("bad: %#v", config)
	}
}

func TestReadConfigPaths_dir(t *testing.T) {
	td, err := ioutil.TempDir("", "serf")
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	defer os.RemoveAll(td)

	err = ioutil.WriteFile(filepath.Join(td, "a.json"),
		[]byte(`{"node_name": "bar"}`), 0644)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	err = ioutil.WriteFile(filepath.Join(td, "b.json"),
		[]byte(`{"node_name": "baz"}`), 0644)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	// A non-json file, shouldn't be read
	err = ioutil.WriteFile(filepath.Join(td, "c"),
		[]byte(`{"node_name": "bad"}`), 0644)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	config, err := ReadConfigPaths([]string{td})
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if config.NodeName != "baz" {
		t.Fatalf("bad: %#v", config)
	}
}

serf-0.6.4/command/agent/event_handler.go

package agent

import (
	"fmt"
	"github.com/hashicorp/serf/serf"
	"log"
	"os"
	"strings"
	"sync"
)

// EventHandler is a handler that does things when events happen.
type EventHandler interface {
	HandleEvent(serf.Event)
}

// ScriptEventHandler invokes scripts for the events that it receives.
type ScriptEventHandler struct {
	SelfFunc func() serf.Member
	Scripts  []EventScript
	Logger   *log.Logger

	scriptLock sync.Mutex
	newScripts []EventScript
}

func (h *ScriptEventHandler) HandleEvent(e serf.Event) {
	// Swap in the new scripts if any
	h.scriptLock.Lock()
	if h.newScripts != nil {
		h.Scripts = h.newScripts
		h.newScripts = nil
	}
	h.scriptLock.Unlock()

	if h.Logger == nil {
		h.Logger = log.New(os.Stderr, "", log.LstdFlags)
	}

	self := h.SelfFunc()
	for _, script := range h.Scripts {
		if !script.Invoke(e) {
			continue
		}

		err := invokeEventScript(h.Logger, script.Script, self, e)
		if err != nil {
			h.Logger.Printf("[ERR] agent: Error invoking script '%s': %s",
				script.Script, err)
		}
	}
}

// UpdateScripts is used to update the scripts we invoke in a
// thread-safe manner.
func (h *ScriptEventHandler) UpdateScripts(scripts []EventScript) {
	h.scriptLock.Lock()
	defer h.scriptLock.Unlock()
	h.newScripts = scripts
}

// EventFilter is used to filter which events are processed.
type EventFilter struct {
	Event string
	Name  string
}

// Invoke tests whether or not this event script should be invoked
// for the given Serf event.
func (s *EventFilter) Invoke(e serf.Event) bool {
	if s.Event == "*" {
		return true
	}

	if e.EventType().String() != s.Event {
		return false
	}

	if s.Event == "user" && s.Name != "" {
		userE, ok := e.(serf.UserEvent)
		if !ok {
			return false
		}
		if userE.Name != s.Name {
			return false
		}
	}

	if s.Event == "query" && s.Name != "" {
		query, ok := e.(*serf.Query)
		if !ok {
			return false
		}
		if query.Name != s.Name {
			return false
		}
	}

	return true
}

// Valid checks if this is a valid agent event script.
func (s *EventFilter) Valid() bool {
	switch s.Event {
	case "member-join":
	case "member-leave":
	case "member-failed":
	case "member-update":
	case "member-reap":
	case "user":
	case "query":
	case "*":
	default:
		return false
	}
	return true
}

// EventScript is a single event script that will be executed in the
// case of an event, and is configured from the command-line or from
// a configuration file.
type EventScript struct {
	EventFilter
	Script string
}

func (s *EventScript) String() string {
	if s.Name != "" {
		return fmt.Sprintf("Event '%s:%s' invoking '%s'", s.Event, s.Name, s.Script)
	}
	return fmt.Sprintf("Event '%s' invoking '%s'", s.Event, s.Script)
}

// ParseEventScript takes a string in the format of "type=script" and
// parses it into an EventScript struct, if it can.
func ParseEventScript(v string) []EventScript {
	var filter, script string
	parts := strings.SplitN(v, "=", 2)
	if len(parts) == 1 {
		script = parts[0]
	} else {
		filter = parts[0]
		script = parts[1]
	}

	filters := ParseEventFilter(filter)
	results := make([]EventScript, 0, len(filters))
	for _, filt := range filters {
		result := EventScript{
			EventFilter: filt,
			Script:      script,
		}
		results = append(results, result)
	}
	return results
}

// ParseEventFilter takes a string with the event type filters and
// parses it into a series of EventFilters, if it can.
func ParseEventFilter(v string) []EventFilter {
	// No filter translates to stream all
	if v == "" {
		v = "*"
	}

	events := strings.Split(v, ",")
	results := make([]EventFilter, 0, len(events))
	for _, event := range events {
		var result EventFilter
		var name string

		if strings.HasPrefix(event, "user:") {
			name = event[len("user:"):]
			event = "user"
		} else if strings.HasPrefix(event, "query:") {
			name = event[len("query:"):]
			event = "query"
		}

		result.Event = event
		result.Name = name
		results = append(results, result)
	}

	return results
}

serf-0.6.4/command/agent/event_handler_mock.go

package agent

import (
	"github.com/hashicorp/serf/serf"
	"sync"
)

// MockEventHandler is an EventHandler implementation that can be used
// for tests.
type MockEventHandler struct {
	Events []serf.Event
	sync.Mutex
}

func (h *MockEventHandler) HandleEvent(e serf.Event) {
	h.Lock()
	defer h.Unlock()
	h.Events = append(h.Events, e)
}

// MockQueryHandler is an EventHandler implementation used for tests;
// it always responds to a query with a given response.
type MockQueryHandler struct {
	Response []byte
	Queries  []*serf.Query
	sync.Mutex
}

func (h *MockQueryHandler) HandleEvent(e serf.Event) {
	query, ok := e.(*serf.Query)
	if !ok {
		return
	}

	h.Lock()
	h.Queries = append(h.Queries, query)
	h.Unlock()

	query.Respond(h.Response)
}

serf-0.6.4/command/agent/event_handler_test.go

package agent

import (
	"fmt"
	"github.com/hashicorp/serf/serf"
	"io/ioutil"
	"net"
	"os"
	"testing"
)

const eventScript = `#!/bin/sh
RESULT_FILE="%s"
echo $SERF_SELF_NAME $SERF_SELF_ROLE >>${RESULT_FILE}
echo $SERF_TAG_DC >> ${RESULT_FILE}
echo $SERF_TAG_BAD_TAG >> ${RESULT_FILE}
echo $SERF_EVENT $SERF_USER_EVENT "$@" >>${RESULT_FILE}
echo $os_env_var >> ${RESULT_FILE}
while read line; do
	printf "${line}\n" >>${RESULT_FILE}
done
`

const userEventScript = `#!/bin/sh
RESULT_FILE="%s"
echo $SERF_SELF_NAME $SERF_SELF_ROLE >>${RESULT_FILE}
echo $SERF_TAG_DC >> ${RESULT_FILE}
echo $SERF_EVENT $SERF_USER_EVENT "$@" >>${RESULT_FILE}
echo $SERF_EVENT $SERF_USER_LTIME "$@" >>${RESULT_FILE}
while read line; do
	printf "${line}\n" >>${RESULT_FILE}
done
`

const queryScript = `#!/bin/sh
RESULT_FILE="%s"
echo $SERF_SELF_NAME $SERF_SELF_ROLE >>${RESULT_FILE}
echo $SERF_TAG_DC >> ${RESULT_FILE}
echo $SERF_EVENT $SERF_QUERY_NAME "$@" >>${RESULT_FILE}
echo $SERF_EVENT $SERF_QUERY_LTIME "$@" >>${RESULT_FILE}
while read line; do
	printf "${line}\n" >>${RESULT_FILE}
done
`

// testEventScript creates an event script that can be used with the
// agent. It returns the path to the event script itself and a path to
// the file that will contain the events that the script receives.
func testEventScript(t *testing.T, script string) (string, string) {
	scriptFile, err := ioutil.TempFile("", "serf")
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	defer scriptFile.Close()

	if err := scriptFile.Chmod(0755); err != nil {
		t.Fatalf("err: %s", err)
	}

	resultFile, err := ioutil.TempFile("", "serf-result")
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	defer resultFile.Close()

	_, err = scriptFile.Write([]byte(
		fmt.Sprintf(script, resultFile.Name())))
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	return scriptFile.Name(), resultFile.Name()
}

func TestScriptEventHandler(t *testing.T) {
	os.Setenv("os_env_var", "os-env-foo")
	script, results := testEventScript(t, eventScript)

	h := &ScriptEventHandler{
		SelfFunc: func() serf.Member {
			return serf.Member{
				Name: "ourname",
				Tags: map[string]string{"role": "ourrole", "dc": "east-aws", "bad-tag": "bad"},
			}
		},
		Scripts: []EventScript{
			{
				EventFilter: EventFilter{
					Event: "*",
				},
				Script: script,
			},
		},
	}

	event := serf.MemberEvent{
		Type: serf.EventMemberJoin,
		Members: []serf.Member{
			{
				Name: "foo",
				Addr: net.ParseIP("1.2.3.4"),
				Tags: map[string]string{"role": "bar", "foo": "bar"},
			},
		},
	}

	h.HandleEvent(event)

	result, err := ioutil.ReadFile(results)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	expected1 := "ourname ourrole\neast-aws\nbad\nmember-join\nos-env-foo\nfoo\t1.2.3.4\tbar\trole=bar,foo=bar\n"
	expected2 := "ourname ourrole\neast-aws\nbad\nmember-join\nos-env-foo\nfoo\t1.2.3.4\tbar\tfoo=bar,role=bar\n"
	if string(result) != expected1 && string(result) != expected2 {
		t.Fatalf("bad: %#v. Expected: %#v or %v", string(result), expected1, expected2)
	}
}

func TestScriptUserEventHandler(t *testing.T) {
	script, results := testEventScript(t, userEventScript)

	h := &ScriptEventHandler{
		SelfFunc: func() serf.Member {
			return serf.Member{
				Name: "ourname",
				Tags: map[string]string{"role": "ourrole", "dc": "east-aws"},
			}
		},
		Scripts: []EventScript{
			{
				EventFilter: EventFilter{
					Event: "*",
				},
				Script: script,
			},
		},
	}

	userEvent := serf.UserEvent{
		LTime:    1,
		Name:     "baz",
		Payload:  []byte("foobar"),
		Coalesce: true,
	}

	h.HandleEvent(userEvent)

	result, err := ioutil.ReadFile(results)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	expected := "ourname ourrole\neast-aws\nuser baz\nuser 1\nfoobar\n"
	if string(result) != expected {
		t.Fatalf("bad: %#v. Expected: %#v", string(result), expected)
	}
}

func TestScriptQueryEventHandler(t *testing.T) {
	script, results := testEventScript(t, queryScript)

	h := &ScriptEventHandler{
		SelfFunc: func() serf.Member {
			return serf.Member{
				Name: "ourname",
				Tags: map[string]string{"role": "ourrole", "dc": "east-aws"},
			}
		},
		Scripts: []EventScript{
			{
				EventFilter: EventFilter{
					Event: "*",
				},
				Script: script,
			},
		},
	}

	query := &serf.Query{
		LTime:   42,
		Name:    "uptime",
		Payload: []byte("load average"),
	}

	h.HandleEvent(query)

	result, err := ioutil.ReadFile(results)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	expected := "ourname ourrole\neast-aws\nquery uptime\nquery 42\nload average\n"
	if string(result) != expected {
		t.Fatalf("bad: %#v. Expected: %#v", string(result), expected)
	}
}

func TestEventScriptInvoke(t *testing.T) {
	testCases := []struct {
		script EventScript
		event  serf.Event
		invoke bool
	}{
		{
			EventScript{EventFilter{"*", ""}, "script.sh"},
			serf.MemberEvent{},
			true,
		},
		{
			EventScript{EventFilter{"user", ""}, "script.sh"},
			serf.MemberEvent{},
			false,
		},
		{
			EventScript{EventFilter{"user", "deploy"}, "script.sh"},
			serf.UserEvent{Name: "deploy"},
			true,
		},
		{
			EventScript{EventFilter{"user", "deploy"}, "script.sh"},
			serf.UserEvent{Name: "restart"},
			false,
		},
		{
			EventScript{EventFilter{"member-join", ""}, "script.sh"},
			serf.MemberEvent{Type: serf.EventMemberJoin},
			true,
		},
		{
			EventScript{EventFilter{"member-join", ""}, "script.sh"},
			serf.MemberEvent{Type: serf.EventMemberLeave},
			false,
		},
		{
			EventScript{EventFilter{"member-reap", ""}, "script.sh"},
			serf.MemberEvent{Type: serf.EventMemberReap},
			true,
		},
		{
			EventScript{EventFilter{"query", "deploy"}, "script.sh"},
			&serf.Query{Name: "deploy"},
			true,
		},
		{
			EventScript{EventFilter{"query", "uptime"}, "script.sh"},
			&serf.Query{Name: "deploy"},
			false,
		},
		{
			EventScript{EventFilter{"query", ""}, "script.sh"},
			&serf.Query{Name: "deploy"},
			true,
		},
	}

	for _, tc := range testCases {
		result := tc.script.Invoke(tc.event)
		if result != tc.invoke {
			t.Errorf("bad: %#v", tc)
		}
	}
}

func TestEventScriptValid(t *testing.T) {
	testCases := []struct {
		Event string
		Valid bool
	}{
		{"member-join", true},
		{"member-leave", true},
		{"member-failed", true},
		{"member-update", true},
		{"member-reap", true},
		{"user", true},
		{"User", false},
		{"member", false},
		{"query", true},
		{"Query", false},
		{"*", true},
	}

	for _, tc := range testCases {
		script := EventScript{EventFilter: EventFilter{Event: tc.Event}}
		if script.Valid() != tc.Valid {
			t.Errorf("bad: %#v", tc)
		}
	}
}

func TestParseEventScript(t *testing.T) {
	testCases := []struct {
		v       string
		err     bool
		results []EventScript
	}{
		{
			"script.sh",
			false,
			[]EventScript{{EventFilter{"*", ""}, "script.sh"}},
		},
		{
			"member-join=script.sh",
			false,
			[]EventScript{{EventFilter{"member-join", ""}, "script.sh"}},
		},
		{
			"foo,bar=script.sh",
			false,
			[]EventScript{
				{EventFilter{"foo", ""}, "script.sh"},
				{EventFilter{"bar", ""}, "script.sh"},
			},
		},
		{
			"user:deploy=script.sh",
			false,
			[]EventScript{{EventFilter{"user", "deploy"}, "script.sh"}},
		},
		{
			"foo,user:blah,bar,query:tubez=script.sh",
			false,
			[]EventScript{
				{EventFilter{"foo", ""}, "script.sh"},
				{EventFilter{"user", "blah"}, "script.sh"},
				{EventFilter{"bar", ""}, "script.sh"},
				{EventFilter{"query", "tubez"}, "script.sh"},
			},
		},
		{
			"query:load=script.sh",
			false,
			[]EventScript{{EventFilter{"query", "load"}, "script.sh"}},
		},
		{
			"query=script.sh",
			false,
			[]EventScript{{EventFilter{"query", ""}, "script.sh"}},
		},
	}

	for _, tc := range testCases {
		results := ParseEventScript(tc.v)
		if results == nil {
			t.Errorf("result should not be nil")
			continue
		}

		if len(results) != len(tc.results) {
			t.Errorf("bad: %#v", results)
			continue
		}

		for i, r := range results {
			expected := tc.results[i]
			if r.Event != expected.Event {
				t.Errorf("Events not equal: %s %s", r.Event, expected.Event)
			}
			if r.Name != expected.Name {
				t.Errorf("User events not equal: %s %s", r.Name, expected.Name)
			}
			if r.Script != expected.Script {
				t.Errorf("Scripts not equal: %s %s", r.Script, expected.Script)
			}
		}
	}
}

func TestParseEventFilter(t *testing.T) {
	testCases := []struct {
		v       string
		results []EventFilter
	}{
		{
			"",
			[]EventFilter{EventFilter{"*", ""}},
		},
		{
			"member-join",
			[]EventFilter{EventFilter{"member-join", ""}},
		},
		{
			"member-reap",
			[]EventFilter{EventFilter{"member-reap", ""}},
		},
		{
			"foo,bar",
			[]EventFilter{
				EventFilter{"foo", ""},
				EventFilter{"bar", ""},
			},
		},
		{
			"user:deploy",
			[]EventFilter{EventFilter{"user", "deploy"}},
		},
		{
			"foo,user:blah,bar",
			[]EventFilter{
				EventFilter{"foo", ""},
				EventFilter{"user", "blah"},
				EventFilter{"bar", ""},
			},
		},
		{
			"query:load",
			[]EventFilter{EventFilter{"query", "load"}},
		},
	}

	for _, tc := range testCases {
		results := ParseEventFilter(tc.v)
		if results == nil {
			t.Errorf("result 
should not be nil") continue } if len(results) != len(tc.results) { t.Errorf("bad: %#v", results) continue } for i, r := range results { expected := tc.results[i] if r.Event != expected.Event { t.Errorf("Events not equal: %s %s", r.Event, expected.Event) } if r.Name != expected.Name { t.Errorf("User events not equal: %s %s", r.Name, expected.Name) } } } } serf-0.6.4/command/agent/flag_slice_value.go000066400000000000000000000006261246721563000210210ustar00rootroot00000000000000package agent import "strings" // AppendSliceValue implements the flag.Value interface and allows multiple // calls to the same variable to append a list. type AppendSliceValue []string func (s *AppendSliceValue) String() string { return strings.Join(*s, ",") } func (s *AppendSliceValue) Set(value string) error { if *s == nil { *s = make([]string, 0, 1) } *s = append(*s, value) return nil } serf-0.6.4/command/agent/flag_slice_value_test.go000066400000000000000000000011071246721563000220530ustar00rootroot00000000000000package agent import ( "flag" "reflect" "testing" ) func TestAppendSliceValue_implements(t *testing.T) { var raw interface{} raw = new(AppendSliceValue) if _, ok := raw.(flag.Value); !ok { t.Fatalf("AppendSliceValue should be a Value") } } func TestAppendSliceValueSet(t *testing.T) { sv := new(AppendSliceValue) err := sv.Set("foo") if err != nil { t.Fatalf("err: %s", err) } err = sv.Set("bar") if err != nil { t.Fatalf("err: %s", err) } expected := []string{"foo", "bar"} if !reflect.DeepEqual([]string(*sv), expected) { t.Fatalf("Bad: %#v", sv) } } serf-0.6.4/command/agent/gated_writer.go000066400000000000000000000013401246721563000202070ustar00rootroot00000000000000package agent import ( "io" "sync" ) // GatedWriter is an io.Writer implementation that buffers all of its // data into an internal buffer until it is told to let data through. 
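The AppendSliceValue type in flag_slice_value.go above exists so a repeatable CLI flag accumulates values instead of overwriting them. A self-contained sketch of wiring it into a flag set (the flag name and scripts here are illustrative, not taken from the agent's real flag definitions):

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// AppendSliceValue, reproduced from flag_slice_value.go above: each
// occurrence of the flag appends to the slice.
type AppendSliceValue []string

func (s *AppendSliceValue) String() string {
	return strings.Join(*s, ",")
}

func (s *AppendSliceValue) Set(value string) error {
	if *s == nil {
		*s = make([]string, 0, 1)
	}
	*s = append(*s, value)
	return nil
}

func main() {
	var handlers AppendSliceValue
	fs := flag.NewFlagSet("agent", flag.ContinueOnError)
	fs.Var(&handlers, "event-handler", "script to invoke on events (repeatable)")
	fs.Parse([]string{"-event-handler=a.sh", "-event-handler=b.sh"})
	fmt.Println(handlers.String()) // a.sh,b.sh
}
```

Because Set never returns an error, repeated flags simply grow the slice in the order they appear on the command line.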
type GatedWriter struct { Writer io.Writer buf [][]byte flush bool lock sync.RWMutex } // Flush tells the GatedWriter to flush any buffered data and to stop // buffering. func (w *GatedWriter) Flush() { w.lock.Lock() w.flush = true w.lock.Unlock() for _, p := range w.buf { w.Write(p) } w.buf = nil } func (w *GatedWriter) Write(p []byte) (n int, err error) { w.lock.RLock() defer w.lock.RUnlock() if w.flush { return w.Writer.Write(p) } p2 := make([]byte, len(p)) copy(p2, p) w.buf = append(w.buf, p2) return len(p), nil } serf-0.6.4/command/agent/gated_writer_test.go000066400000000000000000000010361246721563000212500ustar00rootroot00000000000000package agent import ( "bytes" "io" "testing" ) func TestGatedWriter_impl(t *testing.T) { var _ io.Writer = new(GatedWriter) } func TestGatedWriter(t *testing.T) { buf := new(bytes.Buffer) w := &GatedWriter{Writer: buf} w.Write([]byte("foo\n")) w.Write([]byte("bar\n")) if buf.String() != "" { t.Fatalf("bad: %s", buf.String()) } w.Flush() if buf.String() != "foo\nbar\n" { t.Fatalf("bad: %s", buf.String()) } w.Write([]byte("baz\n")) if buf.String() != "foo\nbar\nbaz\n" { t.Fatalf("bad: %s", buf.String()) } } serf-0.6.4/command/agent/invoke.go000066400000000000000000000127361246721563000170350ustar00rootroot00000000000000package agent import ( "fmt" "github.com/armon/circbuf" "github.com/armon/go-metrics" "github.com/hashicorp/serf/serf" "io" "log" "os" "os/exec" "regexp" "runtime" "strings" "time" ) const ( windows = "windows" // maxBufSize limits how much data we collect from a handler. // This is to prevent Serf's memory from growing to an enormous // amount due to a faulty handler. maxBufSize = 8 * 1024 // warnSlow is used to warn about a slow handler invocation warnSlow = time.Second ) var sanitizeTagRegexp = regexp.MustCompile(`[^A-Z0-9_]`) // invokeEventScript will execute the given event script with the given // event. Depending on the event, the semantics of how data are passed // are a bit different. 
For all events, the SERF_EVENT environmental // variable is the type of the event. For user events, the SERF_USER_EVENT // environmental variable is also set, containing the name of the user // event that was fired. // // In all events, data is passed in via stdin to facilitate piping. See // the various stdin functions below for more information. func invokeEventScript(logger *log.Logger, script string, self serf.Member, event serf.Event) error { defer metrics.MeasureSince([]string{"agent", "invoke", script}, time.Now()) output, _ := circbuf.NewBuffer(maxBufSize) // Determine the shell invocation based on OS var shell, flag string if runtime.GOOS == windows { shell = "cmd" flag = "/C" } else { shell = "/bin/sh" flag = "-c" } cmd := exec.Command(shell, flag, script) cmd.Env = append(os.Environ(), "SERF_EVENT="+event.EventType().String(), "SERF_SELF_NAME="+self.Name, "SERF_SELF_ROLE="+self.Tags["role"], ) cmd.Stderr = output cmd.Stdout = output // Add all the tags for name, val := range self.Tags { //http://stackoverflow.com/questions/2821043/allowed-characters-in-linux-environment-variable-names //(http://pubs.opengroup.org/onlinepubs/000095399/basedefs/xbd_chap08.html for the long version) //says that env var names must be in [A-Z0-9_] and not start with [0-9]. 
//we only care about the first part, so convert all chars not in [A-Z0-9_] to _
		sanitizedName := sanitizeTagRegexp.ReplaceAllString(strings.ToUpper(name), "_")
		tag_env := fmt.Sprintf("SERF_TAG_%s=%s", sanitizedName, val)
		cmd.Env = append(cmd.Env, tag_env)
	}

	stdin, err := cmd.StdinPipe()
	if err != nil {
		return err
	}

	switch e := event.(type) {
	case serf.MemberEvent:
		go memberEventStdin(logger, stdin, &e)
	case serf.UserEvent:
		cmd.Env = append(cmd.Env, "SERF_USER_EVENT="+e.Name)
		cmd.Env = append(cmd.Env, fmt.Sprintf("SERF_USER_LTIME=%d", e.LTime))
		go streamPayload(logger, stdin, e.Payload)
	case *serf.Query:
		cmd.Env = append(cmd.Env, "SERF_QUERY_NAME="+e.Name)
		cmd.Env = append(cmd.Env, fmt.Sprintf("SERF_QUERY_LTIME=%d", e.LTime))
		go streamPayload(logger, stdin, e.Payload)
	default:
		return fmt.Errorf("Unknown event type: %s", event.EventType().String())
	}

	// Start a timer to warn about slow handlers
	slowTimer := time.AfterFunc(warnSlow, func() {
		logger.Printf("[WARN] agent: Script '%s' slow, execution exceeding %v", script, warnSlow)
	})

	if err := cmd.Start(); err != nil {
		return err
	}

	// Warn if the buffer is overwritten
	if output.TotalWritten() > output.Size() {
		logger.Printf("[WARN] agent: Script '%s' generated %d bytes of output, truncated to %d",
			script, output.TotalWritten(), output.Size())
	}

	err = cmd.Wait()
	slowTimer.Stop()
	logger.Printf("[DEBUG] agent: Event '%s' script output: %s",
		event.EventType().String(), output.String())
	if err != nil {
		return err
	}

	// If this is a query and we have output, respond
	if query, ok := event.(*serf.Query); ok && output.TotalWritten() > 0 {
		if err := query.Respond(output.Bytes()); err != nil {
			logger.Printf("[WARN] agent: Failed to respond to query '%s': %s",
				event.String(), err)
		}
	}
	return nil
}

// eventClean cleans a value to be a parameter in an event line.
func eventClean(v string) string {
	v = strings.Replace(v, "\t", "\\t", -1)
	v = strings.Replace(v, "\n", "\\n", -1)
	return v
}

// Sends data on stdin for a member event.
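The eventClean helper above escapes tabs and newlines so each member stays on a single, tab-separated line of handler stdin. A quick stand-alone sketch of the escaping:

```go
package main

import (
	"fmt"
	"strings"
)

// eventClean mirrors the helper above: literal tabs and newlines in a
// member name or role are replaced with the two-character sequences
// "\t" and "\n" so the line-per-member format stays parseable.
func eventClean(v string) string {
	v = strings.Replace(v, "\t", "\\t", -1)
	v = strings.Replace(v, "\n", "\\n", -1)
	return v
}

func main() {
	// A hostile name with embedded whitespace stays on one line.
	fmt.Println(eventClean("web\t01\nprod")) // prints web\t01\nprod
}
```

Without this escaping, a member name containing a tab or newline would shift fields or split the line that handler scripts parse.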
// // The format for the data is unix tool friendly, separated by whitespace // and newlines. The structure of each line for any member event is: // "NAME ADDRESS ROLE TAGS" where the whitespace is actually tabs. // The name and role are cleaned so that newlines and tabs are replaced // with "\n" and "\t" respectively. func memberEventStdin(logger *log.Logger, stdin io.WriteCloser, e *serf.MemberEvent) { defer stdin.Close() for _, member := range e.Members { // Format the tags as tag1=v1,tag2=v2,... var tagPairs []string for name, value := range member.Tags { tagPairs = append(tagPairs, fmt.Sprintf("%s=%s", name, value)) } tags := strings.Join(tagPairs, ",") // Send the entire line _, err := stdin.Write([]byte(fmt.Sprintf( "%s\t%s\t%s\t%s\n", eventClean(member.Name), member.Addr.String(), eventClean(member.Tags["role"]), eventClean(tags)))) if err != nil { return } } } // Sends data on stdin for an event. The stdin simply contains the // payload (if any). // Most shells read implementations need a newline, force it to be there func streamPayload(logger *log.Logger, stdin io.WriteCloser, buf []byte) { defer stdin.Close() // Append a newline to payload if missing payload := buf if len(payload) > 0 && payload[len(payload)-1] != '\n' { payload = append(payload, '\n') } if _, err := stdin.Write(payload); err != nil { logger.Printf("[ERR] Error writing payload: %s", err) return } } serf-0.6.4/command/agent/ipc.go000066400000000000000000000565331246721563000163200ustar00rootroot00000000000000package agent /* The agent exposes an IPC mechanism that is used for both controlling Serf as well as providing a fast streaming mechanism for events. This allows other applications to easily leverage Serf as the event layer. We additionally make use of the IPC layer to also handle RPC calls from the CLI to unify the code paths. This results in a split Request/Response as well as streaming mode of operation. The system is fairly simple, each client opens a TCP connection to the agent. 
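The tab-separated "NAME ADDRESS ROLE TAGS" lines that memberEventStdin writes can be consumed by a handler like this (a sketch; parseMemberLine is a hypothetical helper, not part of serf):

```go
package main

import (
	"fmt"
	"strings"
)

// parseMemberLine is a hypothetical consumer of the member-event stdin
// format described above: four tab-separated fields per line, with the
// tags field encoded as tag1=v1,tag2=v2,...
func parseMemberLine(line string) (name, addr, role string, tags map[string]string) {
	parts := strings.SplitN(strings.TrimSuffix(line, "\n"), "\t", 4)
	if len(parts) < 3 {
		return "", "", "", nil
	}
	name, addr, role = parts[0], parts[1], parts[2]
	tags = make(map[string]string)
	if len(parts) == 4 && parts[3] != "" {
		for _, pair := range strings.Split(parts[3], ",") {
			kv := strings.SplitN(pair, "=", 2)
			if len(kv) == 2 {
				tags[kv[0]] = kv[1]
			}
		}
	}
	return
}

func main() {
	name, addr, role, tags := parseMemberLine("foo\t1.2.3.4\tbar\trole=bar,foo=bar\n")
	fmt.Println(name, addr, role, tags["foo"]) // foo 1.2.3.4 bar bar
}
```

Note that tag map iteration order is not defined in Go, so a handler should not assume a fixed ordering of the key=value pairs (the tests above accept either ordering for the same reason).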
The connection is initialized with a handshake which establishes the protocol version being used. This is to allow for future changes to the protocol. Once initialized, clients send commands and wait for responses. Certain commands will cause the client to subscribe to events, and those will be pushed down the socket as they are received. This provides a low-latency mechanism for applications to send and receive events, while also providing a flexible control mechanism for Serf. */ import ( "bufio" "fmt" "github.com/armon/go-metrics" "github.com/hashicorp/go-msgpack/codec" "github.com/hashicorp/logutils" "github.com/hashicorp/serf/serf" "io" "log" "net" "os" "regexp" "strings" "sync" "sync/atomic" "time" ) const ( MinIPCVersion = 1 MaxIPCVersion = 1 ) const ( handshakeCommand = "handshake" eventCommand = "event" forceLeaveCommand = "force-leave" joinCommand = "join" membersCommand = "members" membersFilteredCommand = "members-filtered" streamCommand = "stream" stopCommand = "stop" monitorCommand = "monitor" leaveCommand = "leave" installKeyCommand = "install-key" useKeyCommand = "use-key" removeKeyCommand = "remove-key" listKeysCommand = "list-keys" tagsCommand = "tags" queryCommand = "query" respondCommand = "respond" authCommand = "auth" statsCommand = "stats" ) const ( unsupportedCommand = "Unsupported command" unsupportedIPCVersion = "Unsupported IPC version" duplicateHandshake = "Handshake already performed" handshakeRequired = "Handshake required" monitorExists = "Monitor already exists" invalidFilter = "Invalid event filter" streamExists = "Stream with given sequence exists" invalidQueryID = "No pending queries matching ID" authRequired = "Authentication required" invalidAuthToken = "Invalid authentication token" ) const ( queryRecordAck = "ack" queryRecordResponse = "response" queryRecordDone = "done" ) // Request header is sent before each request type requestHeader struct { Command string Seq uint64 } // Response header is sent before each response type 
responseHeader struct { Seq uint64 Error string } type handshakeRequest struct { Version int32 } type authRequest struct { AuthKey string } type eventRequest struct { Name string Payload []byte Coalesce bool } type forceLeaveRequest struct { Node string } type joinRequest struct { Existing []string Replay bool } type joinResponse struct { Num int32 } type membersFilteredRequest struct { Tags map[string]string Status string Name string } type membersResponse struct { Members []Member } type keyRequest struct { Key string } type keyResponse struct { Messages map[string]string Keys map[string]int NumNodes int NumErr int NumResp int } type monitorRequest struct { LogLevel string } type streamRequest struct { Type string } type stopRequest struct { Stop uint64 } type tagsRequest struct { Tags map[string]string DeleteTags []string } type queryRequest struct { FilterNodes []string FilterTags map[string]string RequestAck bool Timeout time.Duration Name string Payload []byte } type respondRequest struct { ID uint64 Payload []byte } type queryRecord struct { Type string From string Payload []byte } type logRecord struct { Log string } type userEventRecord struct { Event string LTime serf.LamportTime Name string Payload []byte Coalesce bool } type queryEventRecord struct { Event string ID uint64 // ID is opaque to client, used to respond LTime serf.LamportTime Name string Payload []byte } type Member struct { Name string Addr net.IP Port uint16 Tags map[string]string Status string ProtocolMin uint8 ProtocolMax uint8 ProtocolCur uint8 DelegateMin uint8 DelegateMax uint8 DelegateCur uint8 } type memberEventRecord struct { Event string Members []Member } type AgentIPC struct { sync.Mutex agent *Agent authKey string clients map[string]*IPCClient listener net.Listener logger *log.Logger logWriter *logWriter stop bool stopCh chan struct{} } type IPCClient struct { queryID uint64 // Used to increment query IDs name string conn net.Conn reader *bufio.Reader writer *bufio.Writer dec 
*codec.Decoder enc *codec.Encoder writeLock sync.Mutex version int32 // From the handshake, 0 before logStreamer *logStream eventStreams map[uint64]*eventStream pendingQueries map[uint64]*serf.Query queryLock sync.Mutex didAuth bool // Did we get an auth token yet? } // send is used to send an object using the MsgPack encoding. send // is serialized to prevent write overlaps, while properly buffering. func (c *IPCClient) Send(header *responseHeader, obj interface{}) error { c.writeLock.Lock() defer c.writeLock.Unlock() if err := c.enc.Encode(header); err != nil { return err } if obj != nil { if err := c.enc.Encode(obj); err != nil { return err } } if err := c.writer.Flush(); err != nil { return err } return nil } func (c *IPCClient) String() string { return fmt.Sprintf("ipc.client: %v", c.conn.RemoteAddr()) } // nextQueryID safely generates a new query ID func (c *IPCClient) nextQueryID() uint64 { return atomic.AddUint64(&c.queryID, 1) } // RegisterQuery is used to register a pending query that may // get a response. 
The ID of the query is returned func (c *IPCClient) RegisterQuery(q *serf.Query) uint64 { // Generate a unique-per-client ID id := c.nextQueryID() // Ensure the query deadline is in the future timeout := q.Deadline().Sub(time.Now()) if timeout < 0 { return id } // Register the query c.queryLock.Lock() c.pendingQueries[id] = q c.queryLock.Unlock() // Setup a timer to deregister after the timeout time.AfterFunc(timeout, func() { c.queryLock.Lock() delete(c.pendingQueries, id) c.queryLock.Unlock() }) return id } // NewAgentIPC is used to create a new Agent IPC handler func NewAgentIPC(agent *Agent, authKey string, listener net.Listener, logOutput io.Writer, logWriter *logWriter) *AgentIPC { if logOutput == nil { logOutput = os.Stderr } ipc := &AgentIPC{ agent: agent, authKey: authKey, clients: make(map[string]*IPCClient), listener: listener, logger: log.New(logOutput, "", log.LstdFlags), logWriter: logWriter, stopCh: make(chan struct{}), } go ipc.listen() return ipc } // Shutdown is used to shutdown the IPC layer func (i *AgentIPC) Shutdown() { i.Lock() defer i.Unlock() if i.stop { return } i.stop = true close(i.stopCh) i.listener.Close() // Close the existing connections for _, client := range i.clients { client.conn.Close() } } // listen is a long running routine that listens for new clients func (i *AgentIPC) listen() { for { conn, err := i.listener.Accept() if err != nil { if i.stop { return } i.logger.Printf("[ERR] agent.ipc: Failed to accept client: %v", err) continue } i.logger.Printf("[INFO] agent.ipc: Accepted client: %v", conn.RemoteAddr()) metrics.IncrCounter([]string{"agent", "ipc", "accept"}, 1) // Wrap the connection in a client client := &IPCClient{ name: conn.RemoteAddr().String(), conn: conn, reader: bufio.NewReader(conn), writer: bufio.NewWriter(conn), eventStreams: make(map[uint64]*eventStream), pendingQueries: make(map[uint64]*serf.Query), } client.dec = codec.NewDecoder(client.reader, &codec.MsgpackHandle{RawToString: true, WriteExt: true}) 
client.enc = codec.NewEncoder(client.writer, &codec.MsgpackHandle{RawToString: true, WriteExt: true}) if err != nil { i.logger.Printf("[ERR] agent.ipc: Failed to create decoder: %v", err) conn.Close() continue } // Register the client i.Lock() if !i.stop { i.clients[client.name] = client go i.handleClient(client) } else { conn.Close() } i.Unlock() } } // deregisterClient is called to cleanup after a client disconnects func (i *AgentIPC) deregisterClient(client *IPCClient) { // Close the socket client.conn.Close() // Remove from the clients list i.Lock() delete(i.clients, client.name) i.Unlock() // Remove from the log writer if client.logStreamer != nil { i.logWriter.DeregisterHandler(client.logStreamer) client.logStreamer.Stop() } // Remove from event handlers for _, es := range client.eventStreams { i.agent.DeregisterEventHandler(es) es.Stop() } } // handleClient is a long running routine that handles a single client func (i *AgentIPC) handleClient(client *IPCClient) { defer i.deregisterClient(client) var reqHeader requestHeader for { // Decode the header if err := client.dec.Decode(&reqHeader); err != nil { if !i.stop { // The second part of this if is to block socket // errors from Windows which appear to happen every // time there is an EOF. 
if err != io.EOF && !strings.Contains(err.Error(), "WSARecv") { i.logger.Printf("[ERR] agent.ipc: failed to decode request header: %v", err) } } return } // Evaluate the command if err := i.handleRequest(client, &reqHeader); err != nil { i.logger.Printf("[ERR] agent.ipc: Failed to evaluate request: %v", err) return } } } // handleRequest is used to evaluate a single client command func (i *AgentIPC) handleRequest(client *IPCClient, reqHeader *requestHeader) error { // Look for a command field command := reqHeader.Command seq := reqHeader.Seq // Ensure the handshake is performed before other commands if command != handshakeCommand && client.version == 0 { respHeader := responseHeader{Seq: seq, Error: handshakeRequired} client.Send(&respHeader, nil) return fmt.Errorf(handshakeRequired) } metrics.IncrCounter([]string{"agent", "ipc", "command"}, 1) // Ensure the client has authenticated after the handshake if necessary if i.authKey != "" && !client.didAuth && command != authCommand && command != handshakeCommand { i.logger.Printf("[WARN] agent.ipc: Client sending commands before auth") respHeader := responseHeader{Seq: seq, Error: authRequired} client.Send(&respHeader, nil) return nil } // Dispatch command specific handlers switch command { case handshakeCommand: return i.handleHandshake(client, seq) case authCommand: return i.handleAuth(client, seq) case eventCommand: return i.handleEvent(client, seq) case membersCommand, membersFilteredCommand: return i.handleMembers(client, command, seq) case streamCommand: return i.handleStream(client, seq) case monitorCommand: return i.handleMonitor(client, seq) case stopCommand: return i.handleStop(client, seq) case forceLeaveCommand: return i.handleForceLeave(client, seq) case joinCommand: return i.handleJoin(client, seq) case leaveCommand: return i.handleLeave(client, seq) case installKeyCommand: return i.handleInstallKey(client, seq) case useKeyCommand: return i.handleUseKey(client, seq) case removeKeyCommand: return 
i.handleRemoveKey(client, seq) case listKeysCommand: return i.handleListKeys(client, seq) case tagsCommand: return i.handleTags(client, seq) case queryCommand: return i.handleQuery(client, seq) case respondCommand: return i.handleRespond(client, seq) case statsCommand: return i.handleStats(client, seq) default: respHeader := responseHeader{Seq: seq, Error: unsupportedCommand} client.Send(&respHeader, nil) return fmt.Errorf("command '%s' not recognized", command) } } func (i *AgentIPC) handleHandshake(client *IPCClient, seq uint64) error { var req handshakeRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } resp := responseHeader{ Seq: seq, Error: "", } // Check the version if req.Version < MinIPCVersion || req.Version > MaxIPCVersion { resp.Error = unsupportedIPCVersion } else if client.version != 0 { resp.Error = duplicateHandshake } else { client.version = req.Version } return client.Send(&resp, nil) } func (i *AgentIPC) handleAuth(client *IPCClient, seq uint64) error { var req authRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } resp := responseHeader{ Seq: seq, Error: "", } // Check the token matches if req.AuthKey == i.authKey { client.didAuth = true } else { resp.Error = invalidAuthToken } return client.Send(&resp, nil) } func (i *AgentIPC) handleEvent(client *IPCClient, seq uint64) error { var req eventRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } // Attempt the send err := i.agent.UserEvent(req.Name, req.Payload, req.Coalesce) // Respond resp := responseHeader{ Seq: seq, Error: errToString(err), } return client.Send(&resp, nil) } func (i *AgentIPC) handleForceLeave(client *IPCClient, seq uint64) error { var req forceLeaveRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } // Attempt leave err := i.agent.ForceLeave(req.Node) // Respond resp := 
responseHeader{ Seq: seq, Error: errToString(err), } return client.Send(&resp, nil) } func (i *AgentIPC) handleJoin(client *IPCClient, seq uint64) error { var req joinRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } // Attempt the join num, err := i.agent.Join(req.Existing, req.Replay) // Respond header := responseHeader{ Seq: seq, Error: errToString(err), } resp := joinResponse{ Num: int32(num), } return client.Send(&header, &resp) } func (i *AgentIPC) handleMembers(client *IPCClient, command string, seq uint64) error { serf := i.agent.Serf() raw := serf.Members() members := make([]Member, 0, len(raw)) if command == membersFilteredCommand { var req membersFilteredRequest err := client.dec.Decode(&req) if err != nil { return fmt.Errorf("decode failed: %v", err) } raw, err = i.filterMembers(raw, req.Tags, req.Status, req.Name) if err != nil { return err } } for _, m := range raw { sm := Member{ Name: m.Name, Addr: m.Addr, Port: m.Port, Tags: m.Tags, Status: m.Status.String(), ProtocolMin: m.ProtocolMin, ProtocolMax: m.ProtocolMax, ProtocolCur: m.ProtocolCur, DelegateMin: m.DelegateMin, DelegateMax: m.DelegateMax, DelegateCur: m.DelegateCur, } members = append(members, sm) } header := responseHeader{ Seq: seq, Error: "", } resp := membersResponse{ Members: members, } return client.Send(&header, &resp) } func (i *AgentIPC) filterMembers(members []serf.Member, tags map[string]string, status string, name string) ([]serf.Member, error) { result := make([]serf.Member, 0, len(members)) // Pre-compile all the regular expressions tagsRe := make(map[string]*regexp.Regexp) for tag, expr := range tags { re, err := regexp.Compile(fmt.Sprintf("^%s$", expr)) if err != nil { return nil, fmt.Errorf("Failed to compile regex: %v", err) } tagsRe[tag] = re } statusRe, err := regexp.Compile(fmt.Sprintf("^%s$", status)) if err != nil { return nil, fmt.Errorf("Failed to compile regex: %v", err) } nameRe, err := 
regexp.Compile(fmt.Sprintf("^%s$", name)) if err != nil { return nil, fmt.Errorf("Failed to compile regex: %v", err) } OUTER: for _, m := range members { // Check if tags were passed, and if they match for tag := range tags { if !tagsRe[tag].MatchString(m.Tags[tag]) { continue OUTER } } // Check if status matches if status != "" && !statusRe.MatchString(m.Status.String()) { continue } // Check if node name matches if name != "" && !nameRe.MatchString(m.Name) { continue } // Made it past the filters! result = append(result, m) } return result, nil } func (i *AgentIPC) handleInstallKey(client *IPCClient, seq uint64) error { var req keyRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } queryResp, err := i.agent.InstallKey(req.Key) header := responseHeader{ Seq: seq, Error: errToString(err), } resp := keyResponse{ Messages: queryResp.Messages, NumNodes: queryResp.NumNodes, NumErr: queryResp.NumErr, NumResp: queryResp.NumResp, } return client.Send(&header, &resp) } func (i *AgentIPC) handleUseKey(client *IPCClient, seq uint64) error { var req keyRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } queryResp, err := i.agent.UseKey(req.Key) header := responseHeader{ Seq: seq, Error: errToString(err), } resp := keyResponse{ Messages: queryResp.Messages, NumNodes: queryResp.NumNodes, NumErr: queryResp.NumErr, NumResp: queryResp.NumResp, } return client.Send(&header, &resp) } func (i *AgentIPC) handleRemoveKey(client *IPCClient, seq uint64) error { var req keyRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } queryResp, err := i.agent.RemoveKey(req.Key) header := responseHeader{ Seq: seq, Error: errToString(err), } resp := keyResponse{ Messages: queryResp.Messages, NumNodes: queryResp.NumNodes, NumErr: queryResp.NumErr, NumResp: queryResp.NumResp, } return client.Send(&header, &resp) } func (i *AgentIPC) handleListKeys(client 
*IPCClient, seq uint64) error { queryResp, err := i.agent.ListKeys() header := responseHeader{ Seq: seq, Error: errToString(err), } resp := keyResponse{ Messages: queryResp.Messages, Keys: queryResp.Keys, NumNodes: queryResp.NumNodes, NumErr: queryResp.NumErr, NumResp: queryResp.NumResp, } return client.Send(&header, &resp) } func (i *AgentIPC) handleStream(client *IPCClient, seq uint64) error { var es *eventStream var req streamRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } resp := responseHeader{ Seq: seq, Error: "", } // Create the event filters filters := ParseEventFilter(req.Type) for _, f := range filters { if !f.Valid() { resp.Error = invalidFilter goto SEND } } // Check if there is an existing stream if _, ok := client.eventStreams[seq]; ok { resp.Error = streamExists goto SEND } // Create an event streamer es = newEventStream(client, filters, seq, i.logger) client.eventStreams[seq] = es // Register with the agent. Defer so that we can respond before // registration, avoids any possible race condition defer i.agent.RegisterEventHandler(es) SEND: return client.Send(&resp, nil) } func (i *AgentIPC) handleMonitor(client *IPCClient, seq uint64) error { var req monitorRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } resp := responseHeader{ Seq: seq, Error: "", } // Upper case the log level req.LogLevel = strings.ToUpper(req.LogLevel) // Create a level filter filter := LevelFilter() filter.MinLevel = logutils.LogLevel(req.LogLevel) if !ValidateLevelFilter(filter.MinLevel, filter) { resp.Error = fmt.Sprintf("Unknown log level: %s", filter.MinLevel) goto SEND } // Check if there is an existing monitor if client.logStreamer != nil { resp.Error = monitorExists goto SEND } // Create a log streamer client.logStreamer = newLogStream(client, filter, seq, i.logger) // Register with the log writer. 
Defer so that we can respond before // registration, avoids any possible race condition defer i.logWriter.RegisterHandler(client.logStreamer) SEND: return client.Send(&resp, nil) } func (i *AgentIPC) handleStop(client *IPCClient, seq uint64) error { var req stopRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } // Remove a log monitor if any if client.logStreamer != nil && client.logStreamer.seq == req.Stop { i.logWriter.DeregisterHandler(client.logStreamer) client.logStreamer.Stop() client.logStreamer = nil } // Remove an event stream if any if es, ok := client.eventStreams[req.Stop]; ok { i.agent.DeregisterEventHandler(es) es.Stop() delete(client.eventStreams, req.Stop) } // Always succeed resp := responseHeader{Seq: seq, Error: ""} return client.Send(&resp, nil) } func (i *AgentIPC) handleLeave(client *IPCClient, seq uint64) error { i.logger.Printf("[INFO] agent.ipc: Graceful leave triggered") // Do the leave err := i.agent.Leave() if err != nil { i.logger.Printf("[ERR] agent.ipc: leave failed: %v", err) } resp := responseHeader{Seq: seq, Error: errToString(err)} // Send and wait err = client.Send(&resp, nil) // Trigger a shutdown! 
if err := i.agent.Shutdown(); err != nil { i.logger.Printf("[ERR] agent.ipc: shutdown failed: %v", err) } return err } func (i *AgentIPC) handleTags(client *IPCClient, seq uint64) error { var req tagsRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } tags := make(map[string]string) for key, val := range i.agent.SerfConfig().Tags { var delTag bool for _, delkey := range req.DeleteTags { delTag = (delTag || delkey == key) } if !delTag { tags[key] = val } } for key, val := range req.Tags { tags[key] = val } err := i.agent.SetTags(tags) resp := responseHeader{Seq: seq, Error: errToString(err)} return client.Send(&resp, nil) } func (i *AgentIPC) handleQuery(client *IPCClient, seq uint64) error { var req queryRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } // Setup the query params := serf.QueryParam{ FilterNodes: req.FilterNodes, FilterTags: req.FilterTags, RequestAck: req.RequestAck, Timeout: req.Timeout, } // Start the query queryResp, err := i.agent.Query(req.Name, req.Payload, ¶ms) // Stream the query responses if err == nil { qs := newQueryResponseStream(client, seq, i.logger) defer func() { go qs.Stream(queryResp) }() } // Respond resp := responseHeader{ Seq: seq, Error: errToString(err), } return client.Send(&resp, nil) } func (i *AgentIPC) handleRespond(client *IPCClient, seq uint64) error { var req respondRequest if err := client.dec.Decode(&req); err != nil { return fmt.Errorf("decode failed: %v", err) } // Lookup the query client.queryLock.Lock() query, ok := client.pendingQueries[req.ID] client.queryLock.Unlock() // Respond if we have a pending query var err error if ok { err = query.Respond(req.Payload) } else { err = fmt.Errorf(invalidQueryID) } // Respond resp := responseHeader{ Seq: seq, Error: errToString(err), } return client.Send(&resp, nil) } // handleStats is used to get various statistics func (i *AgentIPC) handleStats(client *IPCClient, seq 
uint64) error { header := responseHeader{ Seq: seq, Error: "", } resp := i.agent.Stats() return client.Send(&header, resp) } // Used to convert an error to a string representation func errToString(err error) string { if err == nil { return "" } return err.Error() } serf-0.6.4/command/agent/ipc_event_stream.go000066400000000000000000000057071246721563000210710ustar00rootroot00000000000000package agent import ( "fmt" "github.com/hashicorp/serf/serf" "log" ) type streamClient interface { Send(*responseHeader, interface{}) error RegisterQuery(*serf.Query) uint64 } // eventStream is used to stream events to a client over IPC type eventStream struct { client streamClient eventCh chan serf.Event filters []EventFilter logger *log.Logger seq uint64 } func newEventStream(client streamClient, filters []EventFilter, seq uint64, logger *log.Logger) *eventStream { es := &eventStream{ client: client, eventCh: make(chan serf.Event, 512), filters: filters, logger: logger, seq: seq, } go es.stream() return es } func (es *eventStream) HandleEvent(e serf.Event) { // Check the event for _, f := range es.filters { if f.Invoke(e) { goto HANDLE } } return // Do a non-blocking send HANDLE: select { case es.eventCh <- e: default: es.logger.Printf("[WARN] agent.ipc: Dropping event to %v", es.client) } } func (es *eventStream) Stop() { close(es.eventCh) } func (es *eventStream) stream() { var err error for event := range es.eventCh { switch e := event.(type) { case serf.MemberEvent: err = es.sendMemberEvent(e) case serf.UserEvent: err = es.sendUserEvent(e) case *serf.Query: err = es.sendQuery(e) default: err = fmt.Errorf("Unknown event type: %s", event.EventType().String()) } if err != nil { es.logger.Printf("[ERR] agent.ipc: Failed to stream event to %v: %v", es.client, err) return } } } // sendMemberEvent is used to send a single member event func (es *eventStream) sendMemberEvent(me serf.MemberEvent) error { members := make([]Member, 0, len(me.Members)) for _, m := range me.Members { sm := 
Member{ Name: m.Name, Addr: m.Addr, Port: m.Port, Tags: m.Tags, Status: m.Status.String(), ProtocolMin: m.ProtocolMin, ProtocolMax: m.ProtocolMax, ProtocolCur: m.ProtocolCur, DelegateMin: m.DelegateMin, DelegateMax: m.DelegateMax, DelegateCur: m.DelegateCur, } members = append(members, sm) } header := responseHeader{ Seq: es.seq, Error: "", } rec := memberEventRecord{ Event: me.String(), Members: members, } return es.client.Send(&header, &rec) } // sendUserEvent is used to send a single user event func (es *eventStream) sendUserEvent(ue serf.UserEvent) error { header := responseHeader{ Seq: es.seq, Error: "", } rec := userEventRecord{ Event: ue.EventType().String(), LTime: ue.LTime, Name: ue.Name, Payload: ue.Payload, Coalesce: ue.Coalesce, } return es.client.Send(&header, &rec) } // sendQuery is used to send a single query event func (es *eventStream) sendQuery(q *serf.Query) error { id := es.client.RegisterQuery(q) header := responseHeader{ Seq: es.seq, Error: "", } rec := queryEventRecord{ Event: q.EventType().String(), ID: id, LTime: q.LTime, Name: q.Name, Payload: q.Payload, } return es.client.Send(&header, &rec) } serf-0.6.4/command/agent/ipc_event_stream_test.go000066400000000000000000000063221246721563000221220ustar00rootroot00000000000000package agent import ( "bytes" "github.com/hashicorp/serf/serf" "log" "net" "os" "testing" "time" ) type MockStreamClient struct { headers []*responseHeader objs []interface{} err error } func (m *MockStreamClient) Send(h *responseHeader, o interface{}) error { m.headers = append(m.headers, h) m.objs = append(m.objs, o) return m.err } func (m *MockStreamClient) RegisterQuery(q *serf.Query) uint64 { return 42 } func TestIPCEventStream(t *testing.T) { sc := &MockStreamClient{} filters := ParseEventFilter("user:foobar,member-join,query:deploy") es := newEventStream(sc, filters, 42, log.New(os.Stderr, "", log.LstdFlags)) defer es.Stop() es.HandleEvent(serf.UserEvent{ LTime: 123, Name: "foobar", Payload: []byte("test"), 
Coalesce: true, }) es.HandleEvent(serf.UserEvent{ LTime: 124, Name: "ignore", Payload: []byte("test"), Coalesce: true, }) es.HandleEvent(serf.MemberEvent{ Type: serf.EventMemberJoin, Members: []serf.Member{ serf.Member{ Name: "TestNode", Addr: net.IP([]byte{127, 0, 0, 1}), Port: 12345, Tags: map[string]string{"role": "node"}, Status: serf.StatusAlive, ProtocolMin: 0, ProtocolMax: 0, ProtocolCur: 0, DelegateMin: 0, DelegateMax: 0, DelegateCur: 0, }, }, }) es.HandleEvent(&serf.Query{ LTime: 125, Name: "deploy", Payload: []byte("test"), }) time.Sleep(5 * time.Millisecond) if len(sc.headers) != 3 { t.Fatalf("expected 3 messages!") } for _, h := range sc.headers { if h.Seq != 42 { t.Fatalf("bad seq") } if h.Error != "" { t.Fatalf("bad err") } } obj1 := sc.objs[0].(*userEventRecord) if obj1.Event != "user" { t.Fatalf("bad event: %#v", obj1) } if obj1.LTime != 123 { t.Fatalf("bad event: %#v", obj1) } if obj1.Name != "foobar" { t.Fatalf("bad event: %#v", obj1) } if bytes.Compare(obj1.Payload, []byte("test")) != 0 { t.Fatalf("bad event: %#v", obj1) } if !obj1.Coalesce { t.Fatalf("bad event: %#v", obj1) } obj2 := sc.objs[1].(*memberEventRecord) if obj2.Event != "member-join" { t.Fatalf("bad event: %#v", obj2) } mem1 := obj2.Members[0] if mem1.Name != "TestNode" { t.Fatalf("bad member: %#v", mem1) } if bytes.Compare(mem1.Addr, []byte{127, 0, 0, 1}) != 0 { t.Fatalf("bad member: %#v", mem1) } if mem1.Port != 12345 { t.Fatalf("bad member: %#v", mem1) } if mem1.Status != "alive" { t.Fatalf("bad member: %#v", mem1) } if mem1.ProtocolMin != 0 { t.Fatalf("bad member: %#v", mem1) } if mem1.ProtocolMax != 0 { t.Fatalf("bad member: %#v", mem1) } if mem1.ProtocolCur != 0 { t.Fatalf("bad member: %#v", mem1) } if mem1.DelegateMin != 0 { t.Fatalf("bad member: %#v", mem1) } if mem1.DelegateMax != 0 { t.Fatalf("bad member: %#v", mem1) } if mem1.DelegateCur != 0 { t.Fatalf("bad member: %#v", mem1) } obj3 := sc.objs[2].(*queryEventRecord) if obj3.Event != "query" { t.Fatalf("bad query: %#v",
obj3) } if obj3.ID != 42 { t.Fatalf("bad query: %#v", obj3) } if obj3.LTime != 125 { t.Fatalf("bad query: %#v", obj3) } if obj3.Name != "deploy" { t.Fatalf("bad query: %#v", obj3) } if bytes.Compare(obj3.Payload, []byte("test")) != 0 { t.Fatalf("bad query: %#v", obj3) } } serf-0.6.4/command/agent/ipc_log_stream.go000066400000000000000000000025551246721563000205270ustar00rootroot00000000000000package agent import ( "github.com/hashicorp/logutils" "log" ) // logStream is used to stream logs to a client over IPC type logStream struct { client streamClient filter *logutils.LevelFilter logCh chan string logger *log.Logger seq uint64 } func newLogStream(client streamClient, filter *logutils.LevelFilter, seq uint64, logger *log.Logger) *logStream { ls := &logStream{ client: client, filter: filter, logCh: make(chan string, 512), logger: logger, seq: seq, } go ls.stream() return ls } func (ls *logStream) HandleLog(l string) { // Check the log level if !ls.filter.Check([]byte(l)) { return } // Do a non-blocking send select { case ls.logCh <- l: default: // We can't log synchronously, since we are already being invoked // from the logWriter, and a log will need to invoke Write() which // already holds the lock.
We must therefore do the log async, so // as to not deadlock go ls.logger.Printf("[WARN] agent.ipc: Dropping logs to %v", ls.client) } } func (ls *logStream) Stop() { close(ls.logCh) } func (ls *logStream) stream() { header := responseHeader{Seq: ls.seq, Error: ""} rec := logRecord{Log: ""} for line := range ls.logCh { rec.Log = line if err := ls.client.Send(&header, &rec); err != nil { ls.logger.Printf("[ERR] agent.ipc: Failed to stream log to %v: %v", ls.client, err) return } } } serf-0.6.4/command/agent/ipc_log_stream_test.go000066400000000000000000000013661246721563000215650ustar00rootroot00000000000000package agent import ( "github.com/hashicorp/logutils" "log" "os" "testing" "time" ) func TestIPCLogStream(t *testing.T) { sc := &MockStreamClient{} filter := LevelFilter() filter.MinLevel = logutils.LogLevel("INFO") ls := newLogStream(sc, filter, 42, log.New(os.Stderr, "", log.LstdFlags)) defer ls.Stop() log := "[DEBUG] this is a test log" log2 := "[INFO] This should pass" ls.HandleLog(log) ls.HandleLog(log2) time.Sleep(5 * time.Millisecond) if len(sc.headers) != 1 { t.Fatalf("expected 1 message!") } for _, h := range sc.headers { if h.Seq != 42 { t.Fatalf("bad seq") } if h.Error != "" { t.Fatalf("bad err") } } obj1 := sc.objs[0].(*logRecord) if obj1.Log != log2 { t.Fatalf("bad event %#v", obj1) } } serf-0.6.4/command/agent/ipc_query_response_stream.go000066400000000000000000000041451246721563000230260ustar00rootroot00000000000000package agent import ( "github.com/hashicorp/serf/serf" "log" "time" ) // queryResponseStream is used to stream the query results back to a client type queryResponseStream struct { client streamClient logger *log.Logger seq uint64 } func newQueryResponseStream(client streamClient, seq uint64, logger *log.Logger) *queryResponseStream { qs := &queryResponseStream{ client: client, logger: logger, seq: seq, } return qs } // Stream is a long running routine used to stream the results of a query back to a client func (qs *queryResponseStream)
Stream(resp *serf.QueryResponse) { // Setup a timer for the query ending remaining := resp.Deadline().Sub(time.Now()) done := time.After(remaining) ackCh := resp.AckCh() respCh := resp.ResponseCh() for { select { case a := <-ackCh: if err := qs.sendAck(a); err != nil { qs.logger.Printf("[ERR] agent.ipc: Failed to stream ack to %v: %v", qs.client, err) return } case r := <-respCh: if err := qs.sendResponse(r.From, r.Payload); err != nil { qs.logger.Printf("[ERR] agent.ipc: Failed to stream response to %v: %v", qs.client, err) return } case <-done: if err := qs.sendDone(); err != nil { qs.logger.Printf("[ERR] agent.ipc: Failed to stream query end to %v: %v", qs.client, err) } return } } } // sendAck is used to send a single ack func (qs *queryResponseStream) sendAck(from string) error { header := responseHeader{ Seq: qs.seq, Error: "", } rec := queryRecord{ Type: queryRecordAck, From: from, } return qs.client.Send(&header, &rec) } // sendResponse is used to send a single response func (qs *queryResponseStream) sendResponse(from string, payload []byte) error { header := responseHeader{ Seq: qs.seq, Error: "", } rec := queryRecord{ Type: queryRecordResponse, From: from, Payload: payload, } return qs.client.Send(&header, &rec) } // sendDone is used to signal the end func (qs *queryResponseStream) sendDone() error { header := responseHeader{ Seq: qs.seq, Error: "", } rec := queryRecord{ Type: queryRecordDone, } return qs.client.Send(&header, &rec) } serf-0.6.4/command/agent/log_levels.go000066400000000000000000000012031246721563000176600ustar00rootroot00000000000000package agent import ( "github.com/hashicorp/logutils" "io/ioutil" ) // LevelFilter returns a LevelFilter that is configured with the log // levels that we use. 
func LevelFilter() *logutils.LevelFilter { return &logutils.LevelFilter{ Levels: []logutils.LogLevel{"TRACE", "DEBUG", "INFO", "WARN", "ERR"}, MinLevel: "INFO", Writer: ioutil.Discard, } } // ValidateLevelFilter verifies that the given minimum log level is one // of the levels within the filter. func ValidateLevelFilter(minLevel logutils.LogLevel, filter *logutils.LevelFilter) bool { for _, level := range filter.Levels { if level == minLevel { return true } } return false } serf-0.6.4/command/agent/log_writer.go000066400000000000000000000034411246721563000177100ustar00rootroot00000000000000package agent import ( "sync" ) // LogHandler interface is used for clients that want to subscribe // to logs, for example to stream them over an IPC mechanism type LogHandler interface { HandleLog(string) } // logWriter implements io.Writer so it can be used as a log sink. // It maintains a circular buffer of logs, and a set of handlers to // which it can stream the logs. type logWriter struct { sync.Mutex logs []string index int handlers map[LogHandler]struct{} } // NewLogWriter creates a logWriter with the given buffer capacity func NewLogWriter(buf int) *logWriter { return &logWriter{ logs: make([]string, buf), index: 0, handlers: make(map[LogHandler]struct{}), } } // RegisterHandler adds a log handler to receive logs, and sends // the last buffered logs to the handler func (l *logWriter) RegisterHandler(lh LogHandler) { l.Lock() defer l.Unlock() // Do nothing if already registered if _, ok := l.handlers[lh]; ok { return } // Register l.handlers[lh] = struct{}{} // Send the old logs if l.logs[l.index] != "" { for i := l.index; i < len(l.logs); i++ { lh.HandleLog(l.logs[i]) } } for i := 0; i < l.index; i++ { lh.HandleLog(l.logs[i]) } } // DeregisterHandler removes a LogHandler and prevents more invocations func (l *logWriter) DeregisterHandler(lh LogHandler) { l.Lock() defer l.Unlock() delete(l.handlers, lh) } // Write is used to accumulate new logs func (l *logWriter) Write(p []byte) (n int, err
error) { l.Lock() defer l.Unlock() // Strip off newlines at the end if there are any since we store // individual log lines in the agent. n = len(p) if n > 0 && p[n-1] == '\n' { p = p[:n-1] } l.logs[l.index] = string(p) l.index = (l.index + 1) % len(l.logs) for lh := range l.handlers { lh.HandleLog(string(p)) } return } serf-0.6.4/command/agent/log_writer_test.go000066400000000000000000000014241246721563000207460ustar00rootroot00000000000000package agent import ( "testing" ) type MockLogHandler struct { logs []string } func (m *MockLogHandler) HandleLog(l string) { m.logs = append(m.logs, l) } func TestLogWriter(t *testing.T) { h := &MockLogHandler{} w := NewLogWriter(4) // Write some logs w.Write([]byte("one")) // Gets dropped! w.Write([]byte("two")) w.Write([]byte("three")) w.Write([]byte("four")) w.Write([]byte("five")) // Register a handler, sends old! w.RegisterHandler(h) w.Write([]byte("six")) w.Write([]byte("seven")) // Deregister w.DeregisterHandler(h) w.Write([]byte("eight")) w.Write([]byte("nine")) out := []string{ "two", "three", "four", "five", "six", "seven", } for idx := range out { if out[idx] != h.logs[idx] { t.Fatalf("mismatch %v", h.logs) } } } serf-0.6.4/command/agent/mdns.go000066400000000000000000000055131246721563000164760ustar00rootroot00000000000000package agent import ( "fmt" "io" "log" "net" "time" "github.com/hashicorp/mdns" ) const ( mdnsPollInterval = 60 * time.Second mdnsQuietInterval = 100 * time.Millisecond ) // AgentMDNS is used to advertise ourselves using mDNS and to // attempt to join peers periodically using mDNS queries.
type AgentMDNS struct { agent *Agent discover string logger *log.Logger seen map[string]struct{} server *mdns.Server replay bool iface *net.Interface } // NewAgentMDNS is used to create a new AgentMDNS func NewAgentMDNS(agent *Agent, logOutput io.Writer, replay bool, node, discover string, iface *net.Interface, bind net.IP, port int) (*AgentMDNS, error) { // Create the service service, err := mdns.NewMDNSService( node, mdnsName(discover), "", "", port, []net.IP{bind}, []string{fmt.Sprintf("Serf '%s' cluster", discover)}) if err != nil { return nil, err } // Configure mdns server conf := &mdns.Config{ Zone: service, Iface: iface, } // Create the server server, err := mdns.NewServer(conf) if err != nil { return nil, err } // Initialize the AgentMDNS m := &AgentMDNS{ agent: agent, discover: discover, logger: log.New(logOutput, "", log.LstdFlags), seen: make(map[string]struct{}), server: server, replay: replay, iface: iface, } // Start the background workers go m.run() return m, nil } // run is a long running goroutine that scans for new hosts periodically func (m *AgentMDNS) run() { hosts := make(chan *mdns.ServiceEntry, 32) poll := time.After(0) var quiet <-chan time.Time var join []string for { select { case h := <-hosts: // Format the host address addr := net.TCPAddr{IP: h.Addr, Port: h.Port} addrS := addr.String() // Skip if we've handled this host already if _, ok := m.seen[addrS]; ok { continue } // Queue for handling join = append(join, addrS) quiet = time.After(mdnsQuietInterval) case <-quiet: // Attempt the join n, err := m.agent.Join(join, m.replay) if err != nil { m.logger.Printf("[ERR] agent.mdns: Failed to join: %v", err) } if n > 0 { m.logger.Printf("[INFO] agent.mdns: Joined %d hosts", n) } // Mark all as seen for _, n := range join { m.seen[n] = struct{}{} } join = nil case <-poll: poll = time.After(mdnsPollInterval) go m.poll(hosts) } } } // poll is invoked periodically to check for new hosts func (m *AgentMDNS) poll(hosts chan *mdns.ServiceEntry) { 
params := mdns.QueryParam{ Service: mdnsName(m.discover), Interface: m.iface, Entries: hosts, } if err := mdns.Query(¶ms); err != nil { m.logger.Printf("[ERR] agent.mdns: Failed to poll for new hosts: %v", err) } } // mdnsName returns the service name to register and to lookup func mdnsName(discover string) string { return fmt.Sprintf("_serf_%s._tcp", discover) } serf-0.6.4/command/agent/rpc_client_test.go000066400000000000000000000505251246721563000207210ustar00rootroot00000000000000package agent import ( "bytes" "encoding/base64" "github.com/hashicorp/serf/client" "github.com/hashicorp/serf/serf" "github.com/hashicorp/serf/testutil" "io" "net" "os" "strings" "testing" "time" ) func testRPCClient(t *testing.T) (*client.RPCClient, *Agent, *AgentIPC) { agentConf := DefaultConfig() serfConf := serf.DefaultConfig() return testRPCClientWithConfig(t, agentConf, serfConf) } // testRPCClient returns an RPCClient connected to an RPC server that // serves only this connection. func testRPCClientWithConfig(t *testing.T, agentConf *Config, serfConf *serf.Config) (*client.RPCClient, *Agent, *AgentIPC) { l, err := net.Listen("tcp", "127.0.0.1:0") if err != nil { t.Fatalf("err: %s", err) } lw := NewLogWriter(512) mult := io.MultiWriter(os.Stderr, lw) agent := testAgentWithConfig(agentConf, serfConf, mult) ipc := NewAgentIPC(agent, "", l, mult, lw) rpcClient, err := client.NewRPCClient(l.Addr().String()) if err != nil { t.Fatalf("err: %s", err) } return rpcClient, agent, ipc } func findMember(t *testing.T, members []serf.Member, name string) serf.Member { for _, m := range members { if m.Name == name { return m } } t.Fatalf("%s not found", name) return serf.Member{} } func TestRPCClientForceLeave(t *testing.T) { client, a1, ipc := testRPCClient(t) a2 := testAgent(nil) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() defer a2.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } if err := a2.Start(); err != nil { t.Fatalf("err: %s", err) } 
testutil.Yield() s2Addr := a2.conf.MemberlistConfig.BindAddr if _, err := a1.Join([]string{s2Addr}, false); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() if err := a2.Shutdown(); err != nil { t.Fatalf("err: %s", err) } start := time.Now() WAIT: time.Sleep(a1.conf.MemberlistConfig.ProbeInterval * 3) m := a1.Serf().Members() if len(m) != 2 { t.Fatalf("should have 2 members: %#v", a1.Serf().Members()) } if findMember(t, m, a2.conf.NodeName).Status != serf.StatusFailed && time.Now().Sub(start) < 3*time.Second { goto WAIT } if err := client.ForceLeave(a2.conf.NodeName); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() m = a1.Serf().Members() if len(m) != 2 { t.Fatalf("should have 2 members: %#v", a1.Serf().Members()) } if findMember(t, m, a2.conf.NodeName).Status != serf.StatusLeft { t.Fatalf("should be left: %#v", m[1]) } } func TestRPCClientJoin(t *testing.T) { client, a1, ipc := testRPCClient(t) a2 := testAgent(nil) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() defer a2.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } if err := a2.Start(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() n, err := client.Join([]string{a2.conf.MemberlistConfig.BindAddr}, false) if err != nil { t.Fatalf("err: %s", err) } if n != 1 { t.Fatalf("n != 1: %d", n) } } func TestRPCClientMembers(t *testing.T) { client, a1, ipc := testRPCClient(t) a2 := testAgent(nil) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() defer a2.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } if err := a2.Start(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() mem, err := client.Members() if err != nil { t.Fatalf("err: %s", err) } if len(mem) != 1 { t.Fatalf("bad: %#v", mem) } _, err = client.Join([]string{a2.conf.MemberlistConfig.BindAddr}, false) if err != nil { t.Fatalf("err: %s", err) } testutil.Yield() mem, err = client.Members() if err != nil { t.Fatalf("err: %s", err) } if len(mem) != 2 { 
t.Fatalf("bad: %#v", mem) } } func TestRPCClientMembersFiltered(t *testing.T) { client, a1, ipc := testRPCClient(t) a2 := testAgent(nil) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() defer a2.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } if err := a2.Start(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() _, err := client.Join([]string{a2.conf.MemberlistConfig.BindAddr}, false) if err != nil { t.Fatalf("err: %s", err) } err = client.UpdateTags(map[string]string{ "tag1": "val1", "tag2": "val2", }, []string{}) if err != nil { t.Fatalf("bad: %s", err) } testutil.Yield() // Make sure that filters work on member names mem, err := client.MembersFiltered(map[string]string{}, "", ".*") if err != nil { t.Fatalf("bad: %s", err) } if len(mem) == 0 { t.Fatalf("should have matched more than 0 members") } mem, err = client.MembersFiltered(map[string]string{}, "", "bad") if err != nil { t.Fatalf("bad: %s", err) } if len(mem) != 0 { t.Fatalf("should have matched 0 members: %#v", mem) } // Make sure that filters work on member tags mem, err = client.MembersFiltered(map[string]string{"tag1": "val.*"}, "", "") if err != nil { t.Fatalf("bad: %s", err) } if len(mem) != 1 { t.Fatalf("should have matched 1 member: %#v", mem) } // Make sure tag filters work on multiple tags mem, err = client.MembersFiltered(map[string]string{ "tag1": "val.*", "tag2": "val2", }, "", "") if err != nil { t.Fatalf("bad: %s", err) } if len(mem) != 1 { t.Fatalf("should have matched one member: %#v", mem) } // Make sure all tags match when multiple tags are passed mem, err = client.MembersFiltered(map[string]string{ "tag1": "val1", "tag2": "bad", }, "", "") if err != nil { t.Fatalf("bad: %s", err) } if len(mem) != 0 { t.Fatalf("should have matched 0 members: %#v", mem) } // Make sure that filters work on member status if err := client.ForceLeave(a2.conf.NodeName); err != nil { t.Fatalf("bad: %s", err) } mem, err = client.MembersFiltered(map[string]string{}, 
"alive", "") if err != nil { t.Fatalf("err: %s", err) } if len(mem) != 1 { t.Fatalf("should have matched 1 member: %#v", mem) } mem, err = client.MembersFiltered(map[string]string{}, "leaving", "") if err != nil { t.Fatalf("err: %s", err) } if len(mem) != 1 { t.Fatalf("should have matched 1 member: %#v", mem) } } func TestRPCClientUserEvent(t *testing.T) { client, a1, ipc := testRPCClient(t) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() handler := new(MockEventHandler) a1.RegisterEventHandler(handler) if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() if err := client.UserEvent("deploy", []byte("foo"), false); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() handler.Lock() defer handler.Unlock() if len(handler.Events) == 0 { t.Fatal("no events") } serfEvent, ok := handler.Events[len(handler.Events)-1].(serf.UserEvent) if !ok { t.Fatalf("bad: %#v", serfEvent) } if serfEvent.Name != "deploy" { t.Fatalf("bad: %#v", serfEvent) } if string(serfEvent.Payload) != "foo" { t.Fatalf("bad: %#v", serfEvent) } } func TestRPCClientLeave(t *testing.T) { client, a1, ipc := testRPCClient(t) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() testutil.Yield() if err := client.Leave(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() select { case <-a1.ShutdownCh(): default: t.Fatalf("agent should be shutdown!") } } func TestRPCClientMonitor(t *testing.T) { client, a1, ipc := testRPCClient(t) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } eventCh := make(chan string, 64) if handle, err := client.Monitor("debug", eventCh); err != nil { t.Fatalf("err: %s", err) } else { defer client.Stop(handle) } testutil.Yield() select { case e := <-eventCh: if !strings.Contains(e, "Accepted client") { t.Fatalf("bad: %s", e) } default: t.Fatalf("should have backlog") } // Drain the rest of the messages as we know it drainEventCh(eventCh) // Join a bad 
thing to generate more events a1.Join(nil, false) testutil.Yield() select { case e := <-eventCh: if !strings.Contains(e, "joining") { t.Fatalf("bad: %s", e) } default: t.Fatalf("should have message") } } func TestRPCClientStream_User(t *testing.T) { client, a1, ipc := testRPCClient(t) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } eventCh := make(chan map[string]interface{}, 64) if handle, err := client.Stream("user", eventCh); err != nil { t.Fatalf("err: %s", err) } else { defer client.Stop(handle) } testutil.Yield() if err := client.UserEvent("deploy", []byte("foo"), false); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() select { case e := <-eventCh: if e["Event"].(string) != "user" { t.Fatalf("bad event: %#v", e) } if e["LTime"].(int64) != 1 { t.Fatalf("bad event: %#v", e) } if e["Name"].(string) != "deploy" { t.Fatalf("bad event: %#v", e) } if bytes.Compare(e["Payload"].([]byte), []byte("foo")) != 0 { t.Fatalf("bad event: %#v", e) } if e["Coalesce"].(bool) != false { t.Fatalf("bad event: %#v", e) } default: t.Fatalf("should have event") } } func TestRPCClientStream_Member(t *testing.T) { client, a1, ipc := testRPCClient(t) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() a2 := testAgent(nil) defer a2.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } if err := a2.Start(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() eventCh := make(chan map[string]interface{}, 64) if handle, err := client.Stream("*", eventCh); err != nil { t.Fatalf("err: %s", err) } else { defer client.Stop(handle) } testutil.Yield() s2Addr := a2.conf.MemberlistConfig.BindAddr if _, err := a1.Join([]string{s2Addr}, false); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() select { case e := <-eventCh: if e["Event"].(string) != "member-join" { t.Fatalf("bad event: %#v", e) } members := e["Members"].([]interface{}) if len(members) != 1 { t.Fatalf("should 
have 1 member") } member := members[0].(map[interface{}]interface{}) if _, ok := member["Name"].(string); !ok { t.Fatalf("bad event: %#v", e) } if _, ok := member["Addr"].([]uint8); !ok { t.Fatalf("bad event: %#v", e) } if _, ok := member["Port"].(uint64); !ok { t.Fatalf("bad event: %#v", e) } if _, ok := member["Tags"].(map[interface{}]interface{}); !ok { t.Fatalf("bad event: %#v", e) } if stat, _ := member["Status"].(string); stat != "alive" { t.Fatalf("bad event: %#v", e) } if _, ok := member["ProtocolMin"].(int64); !ok { t.Fatalf("bad event: %#v", e) } if _, ok := member["ProtocolMax"].(int64); !ok { t.Fatalf("bad event: %#v", e) } if _, ok := member["ProtocolCur"].(int64); !ok { t.Fatalf("bad event: %#v", e) } if _, ok := member["DelegateMin"].(int64); !ok { t.Fatalf("bad event: %#v", e) } if _, ok := member["DelegateMax"].(int64); !ok { t.Fatalf("bad event: %#v", e) } if _, ok := member["DelegateCur"].(int64); !ok { t.Fatalf("bad event: %#v", e) } default: t.Fatalf("should have event") } } func TestRPCClientUpdateTags(t *testing.T) { client, a1, ipc := testRPCClient(t) defer ipc.Shutdown() defer client.Close() defer a1.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() mem, err := client.Members() if err != nil { t.Fatalf("err: %s", err) } if len(mem) != 1 { t.Fatalf("bad: %#v", mem) } m0 := mem[0] if _, ok := m0.Tags["testing"]; ok { t.Fatalf("have testing tag") } if err := client.UpdateTags(map[string]string{"testing": "1"}, nil); err != nil { t.Fatalf("err: %s", err) } mem, err = client.Members() if err != nil { t.Fatalf("err: %s", err) } if len(mem) != 1 { t.Fatalf("bad: %#v", mem) } m0 = mem[0] if _, ok := m0.Tags["testing"]; !ok { t.Fatalf("missing testing tag") } } func TestRPCClientQuery(t *testing.T) { cl, a1, ipc := testRPCClient(t) defer ipc.Shutdown() defer cl.Close() defer a1.Shutdown() handler := new(MockQueryHandler) handler.Response = []byte("ok") a1.RegisterEventHandler(handler) if err := a1.Start(); err 
!= nil { t.Fatalf("err: %s", err) } testutil.Yield() ackCh := make(chan string, 1) respCh := make(chan client.NodeResponse, 1) params := client.QueryParam{ RequestAck: true, Timeout: 200 * time.Millisecond, Name: "deploy", Payload: []byte("foo"), AckCh: ackCh, RespCh: respCh, } if err := cl.Query(¶ms); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() handler.Lock() defer handler.Unlock() if len(handler.Queries) == 0 { t.Fatal("no queries") } query := handler.Queries[0] if query.Name != "deploy" { t.Fatalf("bad: %#v", query) } if string(query.Payload) != "foo" { t.Fatalf("bad: %#v", query) } select { case a := <-ackCh: if a != a1.conf.NodeName { t.Fatalf("Bad ack from: %v", a) } default: t.Fatalf("missing ack") } select { case r := <-respCh: if r.From != a1.conf.NodeName { t.Fatalf("Bad resp from: %v", r) } if string(r.Payload) != "ok" { t.Fatalf("Bad resp from: %v", r) } default: t.Fatalf("missing response") } } func TestRPCClientStream_Query(t *testing.T) { cl, a1, ipc := testRPCClient(t) defer ipc.Shutdown() defer cl.Close() defer a1.Shutdown() if err := a1.Start(); err != nil { t.Fatalf("err: %s", err) } eventCh := make(chan map[string]interface{}, 64) if handle, err := cl.Stream("query", eventCh); err != nil { t.Fatalf("err: %s", err) } else { defer cl.Stop(handle) } testutil.Yield() params := client.QueryParam{ Timeout: 200 * time.Millisecond, Name: "deploy", Payload: []byte("foo"), } if err := cl.Query(¶ms); err != nil { t.Fatalf("err: %s", err) } testutil.Yield() select { case e := <-eventCh: if e["Event"].(string) != "query" { t.Fatalf("bad query: %#v", e) } if e["ID"].(int64) != 1 { t.Fatalf("bad query: %#v", e) } if e["LTime"].(int64) != 1 { t.Fatalf("bad query: %#v", e) } if e["Name"].(string) != "deploy" { t.Fatalf("bad query: %#v", e) } if bytes.Compare(e["Payload"].([]byte), []byte("foo")) != 0 { t.Fatalf("bad query: %#v", e) } default: t.Fatalf("should have query") } } func TestRPCClientStream_Query_Respond(t *testing.T) { cl, a1, ipc := 
testRPCClient(t)
	defer ipc.Shutdown()
	defer cl.Close()
	defer a1.Shutdown()

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}

	eventCh := make(chan map[string]interface{}, 64)
	if handle, err := cl.Stream("query", eventCh); err != nil {
		t.Fatalf("err: %s", err)
	} else {
		defer cl.Stop(handle)
	}
	testutil.Yield()

	ackCh := make(chan string, 1)
	respCh := make(chan client.NodeResponse, 1)
	params := client.QueryParam{
		RequestAck: true,
		Timeout:    500 * time.Millisecond,
		Name:       "ping",
		AckCh:      ackCh,
		RespCh:     respCh,
	}
	if err := cl.Query(&params); err != nil {
		t.Fatalf("err: %s", err)
	}
	testutil.Yield()

	select {
	case e := <-eventCh:
		if e["Event"].(string) != "query" {
			t.Fatalf("bad query: %#v", e)
		}
		if e["Name"].(string) != "ping" {
			t.Fatalf("bad query: %#v", e)
		}

		// Send a response
		id := e["ID"].(int64)
		if err := cl.Respond(uint64(id), []byte("pong")); err != nil {
			t.Fatalf("err: %v", err)
		}
	default:
		t.Fatalf("should have query")
	}
	testutil.Yield()

	// Should have ack
	select {
	case a := <-ackCh:
		if a != a1.conf.NodeName {
			t.Fatalf("Bad ack from: %v", a)
		}
	default:
		t.Fatalf("missing ack")
	}

	// Should have response
	select {
	case r := <-respCh:
		if r.From != a1.conf.NodeName {
			t.Fatalf("Bad resp from: %v", r)
		}
		if string(r.Payload) != "pong" {
			t.Fatalf("Bad resp from: %v", r)
		}
	default:
		t.Fatalf("missing response")
	}
}

func TestRPCClientAuth(t *testing.T) {
	cl, a1, ipc := testRPCClient(t)
	defer ipc.Shutdown()
	defer cl.Close()
	defer a1.Shutdown()

	// Setup an auth key
	ipc.authKey = "foobar"

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}
	testutil.Yield()

	if err := cl.UserEvent("deploy", nil, false); err.Error() != authRequired {
		t.Fatalf("err: %s", err)
	}
	testutil.Yield()

	config := client.Config{Addr: ipc.listener.Addr().String(), AuthKey: "foobar"}
	rpcClient, err := client.ClientFromConfig(&config)
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	defer rpcClient.Close()

	if err := rpcClient.UserEvent("deploy", nil, false); err != nil {
		t.Fatalf("err: %s", err)
	}
}

func
TestRPCClient_Keys_EncryptionDisabledError(t *testing.T) {
	client, a1, ipc := testRPCClient(t)
	defer ipc.Shutdown()
	defer client.Close()
	defer a1.Shutdown()

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}

	// Failed installing key
	failures, err := client.InstallKey("El/H8lEqX2WiUa36SxcpZw==")
	if err == nil {
		t.Fatalf("expected encryption disabled error")
	}
	if _, ok := failures[a1.conf.NodeName]; !ok {
		t.Fatalf("expected error from node %s", a1.conf.NodeName)
	}

	// Failed using key
	failures, err = client.UseKey("El/H8lEqX2WiUa36SxcpZw==")
	if err == nil {
		t.Fatalf("expected encryption disabled error")
	}
	if _, ok := failures[a1.conf.NodeName]; !ok {
		t.Fatalf("expected error from node %s", a1.conf.NodeName)
	}

	// Failed removing key
	failures, err = client.RemoveKey("El/H8lEqX2WiUa36SxcpZw==")
	if err == nil {
		t.Fatalf("expected encryption disabled error")
	}
	if _, ok := failures[a1.conf.NodeName]; !ok {
		t.Fatalf("expected error from node %s", a1.conf.NodeName)
	}

	// Failed listing keys
	_, _, failures, err = client.ListKeys()
	if err == nil {
		t.Fatalf("expected encryption disabled error")
	}
	if _, ok := failures[a1.conf.NodeName]; !ok {
		t.Fatalf("expected error from node %s", a1.conf.NodeName)
	}
}

func TestRPCClient_Keys(t *testing.T) {
	newKey := "El/H8lEqX2WiUa36SxcpZw=="
	existing := "A2xzjs0eq9PxSV2+dPi3sg=="
	existingBytes, err := base64.StdEncoding.DecodeString(existing)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	agentConf := DefaultConfig()
	serfConf := serf.DefaultConfig()
	serfConf.MemberlistConfig.SecretKey = existingBytes

	client, a1, ipc := testRPCClientWithConfig(t, agentConf, serfConf)
	defer ipc.Shutdown()
	defer client.Close()
	defer a1.Shutdown()

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}
	testutil.Yield()

	keys, num, _, err := client.ListKeys()
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if _, ok := keys[newKey]; ok {
		t.Fatalf("have new key: %s", newKey)
	}

	// Trying to use a key that doesn't exist errors
	if _, err := client.UseKey(newKey);
err == nil {
		t.Fatalf("expected use-key error: %s", newKey)
	}

	// Keyring should not contain new key at this point
	keys, _, _, err = client.ListKeys()
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if _, ok := keys[newKey]; ok {
		t.Fatalf("have new key: %s", newKey)
	}

	// Invalid key installation throws an error
	if _, err := client.InstallKey("badkey"); err == nil {
		t.Fatalf("expected bad key error")
	}

	// InstallKey should succeed
	if _, err := client.InstallKey(newKey); err != nil {
		t.Fatalf("err: %s", err)
	}

	// InstallKey is idempotent
	if _, err := client.InstallKey(newKey); err != nil {
		t.Fatalf("err: %s", err)
	}

	// New key should now appear in the list of keys
	keys, num, _, err = client.ListKeys()
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if num != 1 {
		t.Fatalf("expected 1 member total, got %d", num)
	}
	if _, ok := keys[newKey]; !ok {
		t.Fatalf("key not found: %s", newKey)
	}

	// Counter of installed copies of new key should be 1
	if keys[newKey] != 1 {
		t.Fatalf("expected 1 member with key %s, have %d", newKey, keys[newKey])
	}

	// Deleting primary key should return error
	if _, err := client.RemoveKey(existing); err == nil {
		t.Fatalf("expected error deleting primary key: %s", newKey)
	}

	// UseKey succeeds when given a key that exists
	if _, err := client.UseKey(newKey); err != nil {
		t.Fatalf("err: %s", err)
	}

	// UseKey is idempotent
	if _, err := client.UseKey(newKey); err != nil {
		t.Fatalf("err: %s", err)
	}

	// Removing the now-primary key should fail
	if _, err := client.RemoveKey(newKey); err == nil {
		t.Fatalf("expected error deleting primary key: %s", newKey)
	}

	// Removing the old, now non-primary key should succeed
	if _, err := client.RemoveKey(existing); err != nil {
		t.Fatalf("err: %s", err)
	}
}

func TestRPCClientStats(t *testing.T) {
	client, a1, ipc := testRPCClient(t)
	defer ipc.Shutdown()
	defer client.Close()
	defer a1.Shutdown()

	if err := a1.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}
	testutil.Yield()

	stats, err := client.Stats()
	if err != nil {
		t.Fatalf("err: %v", err)
	}

	if
stats["agent"]["name"] != a1.conf.NodeName {
		t.Fatalf("bad: %v", stats)
	}
}

serf-0.6.4/command/agent/syslog.go

package agent

import (
	"bytes"
	"github.com/hashicorp/go-syslog"
)

// levelPriority is used to map a log level to a
// syslog priority level
var levelPriority = map[string]gsyslog.Priority{
	"TRACE": gsyslog.LOG_DEBUG,
	"DEBUG": gsyslog.LOG_INFO,
	"INFO":  gsyslog.LOG_NOTICE,
	"WARN":  gsyslog.LOG_WARNING,
	"ERR":   gsyslog.LOG_ERR,
	"CRIT":  gsyslog.LOG_CRIT,
}

// SyslogWrapper is used to cleanup log messages before
// writing them to a Syslogger. Implements the io.Writer
// interface.
type SyslogWrapper struct {
	l gsyslog.Syslogger
}

// Write is used to implement io.Writer
func (s *SyslogWrapper) Write(p []byte) (int, error) {
	// Extract log level
	var level string
	afterLevel := p
	x := bytes.IndexByte(p, '[')
	if x >= 0 {
		y := bytes.IndexByte(p[x:], ']')
		if y >= 0 {
			level = string(p[x+1 : x+y])
			afterLevel = p[x+y+2:]
		}
	}

	// Each log level will be handled by a specific syslog priority
	priority, ok := levelPriority[level]
	if !ok {
		priority = gsyslog.LOG_NOTICE
	}

	// Attempt the write
	err := s.l.WriteLevel(priority, afterLevel)
	return len(p), err
}

serf-0.6.4/command/agent/util.go

package agent

import (
	"runtime"
	"strconv"
)

// runtimeStats is used to return various runtime information
func runtimeStats() map[string]string {
	return map[string]string{
		"os":         runtime.GOOS,
		"arch":       runtime.GOARCH,
		"version":    runtime.Version(),
		"max_procs":  strconv.FormatInt(int64(runtime.GOMAXPROCS(0)), 10),
		"goroutines": strconv.FormatInt(int64(runtime.NumGoroutine()), 10),
		"cpu_count":  strconv.FormatInt(int64(runtime.NumCPU()), 10),
	}
}

serf-0.6.4/command/agent/util_test.go

package agent

import (
	"fmt"
	"github.com/hashicorp/serf/serf"
"github.com/hashicorp/serf/testutil" "io" "math/rand" "net" "os" "time" ) func init() { // Seed the random number generator rand.Seed(time.Now().UnixNano()) } func drainEventCh(ch <-chan string) { for { select { case <-ch: default: return } } } func getRPCAddr() string { for i := 0; i < 500; i++ { l, err := net.Listen("tcp", fmt.Sprintf(":%d", rand.Int31n(25000)+1024)) if err == nil { l.Close() return l.Addr().String() } } panic("no listener") } func testAgent(logOutput io.Writer) *Agent { return testAgentWithConfig(DefaultConfig(), serf.DefaultConfig(), logOutput) } func testAgentWithConfig(agentConfig *Config, serfConfig *serf.Config, logOutput io.Writer) *Agent { if logOutput == nil { logOutput = os.Stderr } serfConfig.MemberlistConfig.ProbeInterval = 100 * time.Millisecond serfConfig.MemberlistConfig.BindAddr = testutil.GetBindAddr().String() serfConfig.NodeName = serfConfig.MemberlistConfig.BindAddr agent, err := Create(agentConfig, serfConfig, logOutput) if err != nil { panic(err) } return agent } serf-0.6.4/command/event.go000066400000000000000000000041551246721563000155610ustar00rootroot00000000000000package command import ( "flag" "fmt" "github.com/mitchellh/cli" "strings" ) // EventCommand is a Command implementation that queries a running // Serf agent what members are part of the cluster currently. type EventCommand struct { Ui cli.Ui } func (c *EventCommand) Help() string { helpText := ` Usage: serf event [options] name payload Dispatches a custom event across the Serf cluster. Options: -coalesce=true/false Whether this event can be coalesced. This means that repeated events of the same name within a short period of time are ignored, except the last one received. Default is true. -rpc-addr=127.0.0.1:7373 RPC address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. 
`
	return strings.TrimSpace(helpText)
}

func (c *EventCommand) Run(args []string) int {
	var coalesce bool
	cmdFlags := flag.NewFlagSet("event", flag.ContinueOnError)
	cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
	cmdFlags.BoolVar(&coalesce, "coalesce", true, "coalesce")
	rpcAddr := RPCAddrFlag(cmdFlags)
	rpcAuth := RPCAuthFlag(cmdFlags)
	if err := cmdFlags.Parse(args); err != nil {
		return 1
	}

	args = cmdFlags.Args()
	if len(args) < 1 {
		c.Ui.Error("An event name must be specified.")
		c.Ui.Error("")
		c.Ui.Error(c.Help())
		return 1
	} else if len(args) > 2 {
		c.Ui.Error("Too many command line arguments. Only a name and payload must be specified.")
		c.Ui.Error("")
		c.Ui.Error(c.Help())
		return 1
	}

	event := args[0]
	var payload []byte
	if len(args) == 2 {
		payload = []byte(args[1])
	}

	client, err := RPCClient(*rpcAddr, *rpcAuth)
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err))
		return 1
	}
	defer client.Close()

	if err := client.UserEvent(event, payload, coalesce); err != nil {
		c.Ui.Error(fmt.Sprintf("Error sending event: %s", err))
		return 1
	}

	c.Ui.Output(fmt.Sprintf("Event '%s' dispatched!
Coalescing enabled: %#v", event, coalesce))
	return 0
}

func (c *EventCommand) Synopsis() string {
	return "Send a custom event through the Serf cluster"
}

serf-0.6.4/command/event_test.go

package command

import (
	"github.com/mitchellh/cli"
	"strings"
	"testing"
)

func TestEventCommand_implements(t *testing.T) {
	var _ cli.Command = &EventCommand{}
}

func TestEventCommandRun_noEvent(t *testing.T) {
	ui := new(cli.MockUi)
	c := &EventCommand{Ui: ui}
	args := []string{"-rpc-addr=foo"}

	code := c.Run(args)
	if code != 1 {
		t.Fatalf("bad: %d", code)
	}

	if !strings.Contains(ui.ErrorWriter.String(), "event name") {
		t.Fatalf("bad: %#v", ui.ErrorWriter.String())
	}
}

func TestEventCommandRun_tooMany(t *testing.T) {
	ui := new(cli.MockUi)
	c := &EventCommand{Ui: ui}
	args := []string{"-rpc-addr=foo", "foo", "bar", "baz"}

	code := c.Run(args)
	if code != 1 {
		t.Fatalf("bad: %d", code)
	}

	if !strings.Contains(ui.ErrorWriter.String(), "Too many") {
		t.Fatalf("bad: %#v", ui.ErrorWriter.String())
	}
}

serf-0.6.4/command/force_leave.go

package command

import (
	"flag"
	"fmt"
	"github.com/mitchellh/cli"
	"strings"
)

// ForceLeaveCommand is a Command implementation that tells a running Serf
// to force a member to enter the "left" state.
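The positional-argument handling in EventCommand.Run above (exactly one required event name, one optional payload) can be sketched as a standalone helper. `parseEventArgs` is a hypothetical name for illustration; the real command inlines these checks in Run:

```go
package main

import (
	"errors"
	"fmt"
)

// parseEventArgs mirrors the validation in EventCommand.Run: at least
// one argument (the event name), at most two (name plus payload).
func parseEventArgs(args []string) (name string, payload []byte, err error) {
	if len(args) < 1 {
		return "", nil, errors.New("an event name must be specified")
	}
	if len(args) > 2 {
		return "", nil, errors.New("too many arguments: only a name and payload are allowed")
	}
	name = args[0]
	if len(args) == 2 {
		// The payload is passed through as raw bytes
		payload = []byte(args[1])
	}
	return name, payload, nil
}

func main() {
	name, payload, err := parseEventArgs([]string{"deploy", "v1.2.3"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s\n", name, payload) // prints "deploy v1.2.3"
}
```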
type ForceLeaveCommand struct {
	Ui cli.Ui
}

func (c *ForceLeaveCommand) Run(args []string) int {
	cmdFlags := flag.NewFlagSet("force-leave", flag.ContinueOnError)
	cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
	rpcAddr := RPCAddrFlag(cmdFlags)
	rpcAuth := RPCAuthFlag(cmdFlags)
	if err := cmdFlags.Parse(args); err != nil {
		return 1
	}

	nodes := cmdFlags.Args()
	if len(nodes) != 1 {
		c.Ui.Error("A node name must be specified to force leave.")
		c.Ui.Error("")
		c.Ui.Error(c.Help())
		return 1
	}

	client, err := RPCClient(*rpcAddr, *rpcAuth)
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err))
		return 1
	}
	defer client.Close()

	err = client.ForceLeave(nodes[0])
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Error force leaving: %s", err))
		return 1
	}

	return 0
}

func (c *ForceLeaveCommand) Synopsis() string {
	return "Forces a member of the cluster to enter the \"left\" state"
}

func (c *ForceLeaveCommand) Help() string {
	helpText := `
Usage: serf force-leave [options] name

  Forces a member of a Serf cluster to enter the "left" state. Note
  that if the member is still actually alive, it will eventually rejoin
  the cluster. This command is most useful for cleaning out "failed" nodes
  that are never coming back. If you do not force leave a failed node,
  Serf will attempt to reconnect to those failed nodes for some period of
  time before eventually reaping them.

Options:

  -rpc-addr=127.0.0.1:7373  RPC address of the Serf agent.
  -rpc-auth=""              RPC auth token of the Serf agent.
`
	return strings.TrimSpace(helpText)
}

serf-0.6.4/command/force_leave_test.go

package command

import (
	"github.com/hashicorp/serf/serf"
	"github.com/hashicorp/serf/testutil"
	"github.com/mitchellh/cli"
	"strings"
	"testing"
	"time"
)

func TestForceLeaveCommand_implements(t *testing.T) {
	var _ cli.Command = &ForceLeaveCommand{}
}

func TestForceLeaveCommandRun(t *testing.T) {
	a1 := testAgent(t)
	a2 := testAgent(t)
	defer a1.Shutdown()
	defer a2.Shutdown()
	rpcAddr, ipc := testIPC(t, a1)
	defer ipc.Shutdown()

	_, err := a1.Join([]string{a2.SerfConfig().MemberlistConfig.BindAddr}, false)
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	testutil.Yield()

	// Forcibly shutdown a2 so that it appears "failed" in a1
	if err := a2.Serf().Shutdown(); err != nil {
		t.Fatalf("err: %s", err)
	}

	start := time.Now()
WAIT:
	time.Sleep(a2.SerfConfig().MemberlistConfig.ProbeInterval * 3)
	m := a1.Serf().Members()
	if len(m) != 2 {
		t.Fatalf("should have 2 members: %#v", a1.Serf().Members())
	}
	if m[1].Status != serf.StatusFailed && time.Now().Sub(start) < 3*time.Second {
		goto WAIT
	}

	ui := new(cli.MockUi)
	c := &ForceLeaveCommand{Ui: ui}
	args := []string{
		"-rpc-addr=" + rpcAddr,
		a2.SerfConfig().NodeName,
	}

	code := c.Run(args)
	if code != 0 {
		t.Fatalf("bad: %d.
%#v", code, ui.ErrorWriter.String())
	}

	m = a1.Serf().Members()
	if len(m) != 2 {
		t.Fatalf("should have 2 members: %#v", a1.Serf().Members())
	}
	if m[1].Status != serf.StatusLeft {
		t.Fatalf("should be left: %#v", m[1])
	}
}

func TestForceLeaveCommandRun_noAddrs(t *testing.T) {
	ui := new(cli.MockUi)
	c := &ForceLeaveCommand{Ui: ui}
	args := []string{"-rpc-addr=foo"}

	code := c.Run(args)
	if code != 1 {
		t.Fatalf("bad: %d", code)
	}

	if !strings.Contains(ui.ErrorWriter.String(), "node name") {
		t.Fatalf("bad: %#v", ui.ErrorWriter.String())
	}
}

serf-0.6.4/command/info.go

package command

import (
	"bytes"
	"flag"
	"fmt"
	"github.com/mitchellh/cli"
	"sort"
	"strings"
)

// InfoCommand is a Command implementation that queries a running
// Serf agent for various debugging statistics for operators
type InfoCommand struct {
	Ui cli.Ui
}

func (i *InfoCommand) Help() string {
	helpText := `
Usage: serf info [options]

  Provides debugging information for operators

Options:

  -format                   If provided, output is returned in the specified
                            format. Valid formats are 'json', and 'text' (default)
  -rpc-addr=127.0.0.1:7373  RPC address of the Serf agent.
  -rpc-auth=""              RPC auth token of the Serf agent.
`
	return strings.TrimSpace(helpText)
}

func (i *InfoCommand) Run(args []string) int {
	var format string
	cmdFlags := flag.NewFlagSet("info", flag.ContinueOnError)
	cmdFlags.Usage = func() { i.Ui.Output(i.Help()) }
	cmdFlags.StringVar(&format, "format", "text", "output format")
	rpcAddr := RPCAddrFlag(cmdFlags)
	rpcAuth := RPCAuthFlag(cmdFlags)
	if err := cmdFlags.Parse(args); err != nil {
		return 1
	}

	client, err := RPCClient(*rpcAddr, *rpcAuth)
	if err != nil {
		i.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err))
		return 1
	}
	defer client.Close()

	stats, err := client.Stats()
	if err != nil {
		i.Ui.Error(fmt.Sprintf("Error querying agent: %s", err))
		return 1
	}

	output, err := formatOutput(StatsContainer(stats), format)
	if err != nil {
		i.Ui.Error(fmt.Sprintf("Encoding error: %s", err))
		return 1
	}

	i.Ui.Output(string(output))
	return 0
}

func (i *InfoCommand) Synopsis() string {
	return "Provides debugging information for operators"
}

type StatsContainer map[string]map[string]string

func (s StatsContainer) String() string {
	var buf bytes.Buffer

	// Get the keys in sorted order
	keys := make([]string, 0, len(s))
	for key := range s {
		keys = append(keys, key)
	}
	sort.Strings(keys)

	// Iterate over each top-level key
	for _, key := range keys {
		buf.WriteString(fmt.Sprintf(key + ":\n"))

		// Sort the sub-keys
		subvals := s[key]
		subkeys := make([]string, 0, len(subvals))
		for k := range subvals {
			subkeys = append(subkeys, k)
		}
		sort.Strings(subkeys)

		// Iterate over the subkeys
		for _, subkey := range subkeys {
			val := subvals[subkey]
			buf.WriteString(fmt.Sprintf("\t%s = %s\n", subkey, val))
		}
	}
	return buf.String()
}

serf-0.6.4/command/info_test.go

package command

import (
	"github.com/mitchellh/cli"
	"strings"
	"testing"
)

func TestInfoCommand_implements(t *testing.T) {
	var _ cli.Command = &InfoCommand{}
}

func TestInfoCommandRun(t *testing.T) {
	a1 := testAgent(t)
	defer a1.Shutdown()
	rpcAddr, ipc := testIPC(t, a1)
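The deterministic-ordering idea behind StatsContainer.String is worth isolating: Go map iteration order is randomized, so both the top-level keys and the sub-keys must be sorted before printing to get stable output. A minimal standalone sketch (`formatStats` is an illustrative stand-in, not the Serf implementation):

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
)

// formatStats renders a two-level string map with sorted keys at both
// levels, matching the "key:\n\tsubkey = val" layout used by `serf info`.
func formatStats(s map[string]map[string]string) string {
	var buf bytes.Buffer

	// Sort the top-level keys for stable output
	keys := make([]string, 0, len(s))
	for key := range s {
		keys = append(keys, key)
	}
	sort.Strings(keys)

	for _, key := range keys {
		buf.WriteString(key + ":\n")

		// Sort the sub-keys as well
		subkeys := make([]string, 0, len(s[key]))
		for k := range s[key] {
			subkeys = append(subkeys, k)
		}
		sort.Strings(subkeys)

		for _, subkey := range subkeys {
			fmt.Fprintf(&buf, "\t%s = %s\n", subkey, s[key][subkey])
		}
	}
	return buf.String()
}

func main() {
	stats := map[string]map[string]string{
		"runtime": {"os": "linux", "arch": "amd64"},
		"agent":   {"name": "node1"},
	}
	fmt.Print(formatStats(stats))
}
```

Without the two sorts, repeated runs of the same agent query could print sections and fields in a different order each time, which makes diffing operator output needlessly painful.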
	defer ipc.Shutdown()

	ui := new(cli.MockUi)
	c := &InfoCommand{Ui: ui}
	args := []string{"-rpc-addr=" + rpcAddr}

	code := c.Run(args)
	if code != 0 {
		t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
	}

	if !strings.Contains(ui.OutputWriter.String(), "runtime") {
		t.Fatalf("bad: %#v", ui.OutputWriter.String())
	}
}

serf-0.6.4/command/join.go

package command

import (
	"flag"
	"fmt"
	"github.com/mitchellh/cli"
	"strings"
)

// JoinCommand is a Command implementation that tells a running Serf
// agent to join another.
type JoinCommand struct {
	Ui cli.Ui
}

func (c *JoinCommand) Help() string {
	helpText := `
Usage: serf join [options] address ...

  Tells a running Serf agent (with "serf agent") to join the cluster
  by specifying at least one existing member.

Options:

  -replay                   Replay past user events.
  -rpc-addr=127.0.0.1:7373  RPC address of the Serf agent.
  -rpc-auth=""              RPC auth token of the Serf agent.
`
	return strings.TrimSpace(helpText)
}

func (c *JoinCommand) Run(args []string) int {
	var replayEvents bool

	cmdFlags := flag.NewFlagSet("join", flag.ContinueOnError)
	cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
	cmdFlags.BoolVar(&replayEvents, "replay", false, "replay")
	rpcAddr := RPCAddrFlag(cmdFlags)
	rpcAuth := RPCAuthFlag(cmdFlags)
	if err := cmdFlags.Parse(args); err != nil {
		return 1
	}

	addrs := cmdFlags.Args()
	if len(addrs) == 0 {
		c.Ui.Error("At least one address to join must be specified.")
		c.Ui.Error("")
		c.Ui.Error(c.Help())
		return 1
	}

	client, err := RPCClient(*rpcAddr, *rpcAuth)
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err))
		return 1
	}
	defer client.Close()

	n, err := client.Join(addrs, replayEvents)
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Error joining the cluster: %s", err))
		return 1
	}

	c.Ui.Output(fmt.Sprintf(
		"Successfully joined cluster by contacting %d nodes.", n))
	return 0
}

func (c *JoinCommand) Synopsis() string {
	return "Tell Serf agent to join cluster"
}

serf-0.6.4/command/join_test.go

package command

import (
	"github.com/mitchellh/cli"
	"strings"
	"testing"
)

func TestJoinCommand_implements(t *testing.T) {
	var _ cli.Command = &JoinCommand{}
}

func TestJoinCommandRun(t *testing.T) {
	a1 := testAgent(t)
	a2 := testAgent(t)
	defer a1.Shutdown()
	defer a2.Shutdown()
	rpcAddr, ipc := testIPC(t, a1)
	defer ipc.Shutdown()

	ui := new(cli.MockUi)
	c := &JoinCommand{Ui: ui}
	args := []string{
		"-rpc-addr=" + rpcAddr,
		a2.SerfConfig().MemberlistConfig.BindAddr,
	}

	code := c.Run(args)
	if code != 0 {
		t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
	}

	if len(a1.Serf().Members()) != 2 {
		t.Fatalf("bad: %#v", a1.Serf().Members())
	}
}

func TestJoinCommandRun_noAddrs(t *testing.T) {
	ui := new(cli.MockUi)
	c := &JoinCommand{Ui: ui}
	args := []string{"-rpc-addr=foo"}

	code := c.Run(args)
	if code != 1 {
		t.Fatalf("bad: %d", code)
	}

	if !strings.Contains(ui.ErrorWriter.String(), "one address") {
		t.Fatalf("bad: %#v", ui.ErrorWriter.String())
	}
}

serf-0.6.4/command/keygen.go

package command

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"github.com/mitchellh/cli"
	"strings"
)

// KeygenCommand is a Command implementation that generates an encryption
// key for use in `serf agent`.
type KeygenCommand struct {
	Ui cli.Ui
}

func (c *KeygenCommand) Run(_ []string) int {
	key := make([]byte, 16)
	n, err := rand.Reader.Read(key)
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Error reading random data: %s", err))
		return 1
	}
	if n != 16 {
		c.Ui.Error(fmt.Sprintf("Couldn't read enough entropy.
Generate more entropy!"))
		return 1
	}

	c.Ui.Output(base64.StdEncoding.EncodeToString(key))
	return 0
}

func (c *KeygenCommand) Synopsis() string {
	return "Generates a new encryption key"
}

func (c *KeygenCommand) Help() string {
	helpText := `
Usage: serf keygen

  Generates a new encryption key that can be used to configure the
  agent to encrypt traffic. The output of this command is already
  in the proper format that the agent expects.
`
	return strings.TrimSpace(helpText)
}

serf-0.6.4/command/keygen_test.go

package command

import (
	"encoding/base64"
	"github.com/mitchellh/cli"
	"testing"
)

func TestKeygenCommand_implements(t *testing.T) {
	var _ cli.Command = &KeygenCommand{}
}

func TestKeygenCommand(t *testing.T) {
	ui := new(cli.MockUi)
	c := &KeygenCommand{Ui: ui}
	code := c.Run(nil)
	if code != 0 {
		t.Fatalf("bad: %d", code)
	}

	output := ui.OutputWriter.String()
	result, err := base64.StdEncoding.DecodeString(output)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	if len(result) != 16 {
		t.Fatalf("bad: %#v", result)
	}
}

serf-0.6.4/command/keys.go

package command

import (
	"flag"
	"fmt"
	"github.com/mitchellh/cli"
	"github.com/ryanuber/columnize"
	"strings"
)

type KeysCommand struct {
	Ui cli.Ui
}

func (c *KeysCommand) Help() string {
	helpText := `
Usage: serf keys [options]...

  Manage the internal encryption keyring used by Serf. Modifications made by
  this command will be broadcast to all members in the cluster and applied
  locally on each member. Operations of this command are idempotent.

  To facilitate key rotation, Serf allows for multiple encryption keys to be
  in use simultaneously. Only one key, the "primary" key, will be used for
  encrypting messages. All other keys are used for decryption only.

  All variations of this command will return 0 if all nodes reply and report
  no errors.
  If any node fails to respond or reports failure, we return 1.

  WARNING: Running with multiple encryption keys enabled is recommended as a
  transition state only. Performance may be impacted by using multiple keys.

Options:

  -install=<key>            Install a new key onto Serf's internal keyring. This
                            will enable the key for decryption. The key will not
                            be used to encrypt messages until the primary key is
                            changed.
  -use=<key>                Change the primary key used for encrypting messages.
                            All nodes in the cluster must already have this key
                            installed if they are to continue communicating with
                            each other.
  -remove=<key>             Remove a key from Serf's internal keyring. The key
                            being removed may not be the current primary key.
  -list                     List all currently known keys in the cluster. This
                            will ask all nodes in the cluster for a list of keys
                            and dump a summary containing each key and the
                            number of members it is installed on to the console.
  -rpc-addr=127.0.0.1:7373  RPC address of the Serf agent.
  -rpc-auth=""              RPC auth token of the Serf agent.
`
	return strings.TrimSpace(helpText)
}

func (c *KeysCommand) Run(args []string) int {
	var installKey, useKey, removeKey string
	var lines []string
	var listKeys bool

	cmdFlags := flag.NewFlagSet("key", flag.ContinueOnError)
	cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
	cmdFlags.StringVar(&installKey, "install", "", "install a new key")
	cmdFlags.StringVar(&useKey, "use", "", "change primary encryption key")
	cmdFlags.StringVar(&removeKey, "remove", "", "remove a key")
	cmdFlags.BoolVar(&listKeys, "list", false, "list cluster keys")
	rpcAddr := RPCAddrFlag(cmdFlags)
	rpcAuth := RPCAuthFlag(cmdFlags)
	if err := cmdFlags.Parse(args); err != nil {
		return 1
	}

	c.Ui = &cli.PrefixedUi{
		OutputPrefix: "",
		InfoPrefix:   "==> ",
		ErrorPrefix:  "",
		Ui:           c.Ui,
	}

	// Make sure that we only have one actionable argument to avoid ambiguity
	found := listKeys
	for _, arg := range []string{installKey, useKey, removeKey} {
		if found && len(arg) > 0 {
			c.Ui.Error("Only one of -install, -use, -remove, or -list allowed")
			return 1
		}
		found = found || len(arg) > 0
	}

	// Fail fast if no actionable args were passed
	if !found {
		c.Ui.Error(c.Help())
		return 1
	}

	client, err := RPCClient(*rpcAddr, *rpcAuth)
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err))
		return 1
	}
	defer client.Close()

	if listKeys {
		c.Ui.Info("Asking all members for installed keys...")
		keys, total, failures, err := client.ListKeys()

		if err != nil {
			if len(failures) > 0 {
				for node, message := range failures {
					lines = append(lines, fmt.Sprintf("failed: | %s | %s", node, message))
				}
				out := columnize.SimpleFormat(lines)
				c.Ui.Error(out)
			}
			c.Ui.Error("")
			c.Ui.Error(fmt.Sprintf("Failed to gather member keys: %s", err))
			return 1
		}

		c.Ui.Info("Keys gathered, listing cluster keys...")
		c.Ui.Output("")

		for key, num := range keys {
			lines = append(lines, fmt.Sprintf("%s | [%d/%d]", key, num, total))
		}
		out := columnize.SimpleFormat(lines)
		c.Ui.Output(out)

		return 0
	}

	if installKey != "" {
		c.Ui.Info("Installing key on all members...")

		if failures, err := client.InstallKey(installKey); err != nil {
			if len(failures) > 0 {
				for node, message := range failures {
					lines = append(lines, fmt.Sprintf("failed: | %s | %s", node, message))
				}
				out := columnize.SimpleFormat(lines)
				c.Ui.Error(out)
			}
			c.Ui.Error("")
			c.Ui.Error(fmt.Sprintf("Error installing key: %s", err))
			return 1
		}

		c.Ui.Info("Successfully installed key!")
		return 0
	}

	if useKey != "" {
		c.Ui.Info("Changing primary key on all members...")

		if failures, err := client.UseKey(useKey); err != nil {
			if len(failures) > 0 {
				for node, message := range failures {
					lines = append(lines, fmt.Sprintf("failed: | %s | %s", node, message))
				}
				out := columnize.SimpleFormat(lines)
				c.Ui.Error(out)
			}
			c.Ui.Error("")
			c.Ui.Error(fmt.Sprintf("Error changing primary key: %s", err))
			return 1
		}

		c.Ui.Info("Successfully changed primary key!")
		return 0
	}

	if removeKey != "" {
		c.Ui.Info("Removing key on all members...")

		if failures, err := client.RemoveKey(removeKey); err != nil {
			if len(failures)
> 0 {
				for node, message := range failures {
					lines = append(lines, fmt.Sprintf("failed: | %s | %s", node, message))
				}
				out := columnize.SimpleFormat(lines)
				c.Ui.Error(out)
			}
			c.Ui.Error("")
			c.Ui.Error(fmt.Sprintf("Error removing key: %s", err))
			return 1
		}

		c.Ui.Info("Successfully removed key!")
		return 0
	}

	// Should never reach this point
	return 0
}

func (c *KeysCommand) Synopsis() string {
	return "Manipulate the internal encryption keyring used by Serf"
}

serf-0.6.4/command/keys_test.go

package command

import (
	"encoding/base64"
	"github.com/hashicorp/memberlist"
	"github.com/hashicorp/serf/client"
	"github.com/hashicorp/serf/command/agent"
	"github.com/hashicorp/serf/serf"
	"github.com/mitchellh/cli"
	"strings"
	"testing"
)

func testKeysCommandAgent(t *testing.T) *agent.Agent {
	key1, err := base64.StdEncoding.DecodeString("SNCg1bQSoCdGVlEx+TgfBw==")
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	key2, err := base64.StdEncoding.DecodeString("vbitCcJNwNP4aEWHgofjMg==")
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	keyring, err := memberlist.NewKeyring([][]byte{key1, key2}, key1)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	agentConf := agent.DefaultConfig()
	serfConf := serf.DefaultConfig()
	serfConf.MemberlistConfig.Keyring = keyring

	a1 := testAgentWithConfig(t, agentConf, serfConf)
	return a1
}

func TestKeysCommand_implements(t *testing.T) {
	var _ cli.Command = &KeysCommand{}
}

func TestKeysCommandRun_InstallKey(t *testing.T) {
	a1 := testKeysCommandAgent(t)
	defer a1.Shutdown()
	rpcAddr, ipc := testIPC(t, a1)
	defer ipc.Shutdown()

	ui := new(cli.MockUi)
	c := &KeysCommand{Ui: ui}

	rpcClient, err := client.NewRPCClient(rpcAddr)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	keys, _, _, err := rpcClient.ListKeys()
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	if _, ok := keys["jbuQMI4gMUeh1PPmKOtiBg=="]; ok {
		t.Fatalf("have test key")
	}

	args := []string{
		"-rpc-addr=" + rpcAddr,
		"-install", "jbuQMI4gMUeh1PPmKOtiBg==",
	}

	code
:= c.Run(args)
	if code != 0 {
		t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
	}

	if !strings.Contains(ui.OutputWriter.String(), "Successfully installed key") {
		t.Fatalf("bad: %#v", ui.OutputWriter.String())
	}

	keys, _, _, err = rpcClient.ListKeys()
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	if _, ok := keys["jbuQMI4gMUeh1PPmKOtiBg=="]; !ok {
		t.Fatalf("new key not found")
	}
}

func TestKeysCommandRun_InstallKeyFailure(t *testing.T) {
	a1 := testAgent(t)
	defer a1.Shutdown()
	rpcAddr, ipc := testIPC(t, a1)
	defer ipc.Shutdown()

	ui := new(cli.MockUi)
	c := &KeysCommand{Ui: ui}

	// Trying to install with encryption disabled returns 1
	args := []string{
		"-rpc-addr=" + rpcAddr,
		"-install", "jbuQMI4gMUeh1PPmKOtiBg==",
	}

	code := c.Run(args)
	if code != 1 {
		t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
	}

	// Node errors appear in stderr
	if !strings.Contains(ui.ErrorWriter.String(), "not enabled") {
		t.Fatalf("expected empty keyring error")
	}
}

func TestKeysCommandRun_UseKey(t *testing.T) {
	a1 := testKeysCommandAgent(t)
	defer a1.Shutdown()
	rpcAddr, ipc := testIPC(t, a1)
	defer ipc.Shutdown()

	ui := new(cli.MockUi)
	c := &KeysCommand{Ui: ui}

	// Trying to use a non-existent key returns 1
	args := []string{
		"-rpc-addr=" + rpcAddr,
		"-use", "eodFZZjm7pPwIZ0Miy7boQ==",
	}

	code := c.Run(args)
	if code != 1 {
		t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
	}

	// Using an existing key returns 0
	args = []string{
		"-rpc-addr=" + rpcAddr,
		"-use", "vbitCcJNwNP4aEWHgofjMg==",
	}

	code = c.Run(args)
	if code != 0 {
		t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
	}
}

func TestKeysCommandRun_UseKeyFailure(t *testing.T) {
	a1 := testKeysCommandAgent(t)
	defer a1.Shutdown()
	rpcAddr, ipc := testIPC(t, a1)
	defer ipc.Shutdown()

	ui := new(cli.MockUi)
	c := &KeysCommand{Ui: ui}

	// Trying to use a key that doesn't exist returns 1
	args := []string{
		"-rpc-addr=" + rpcAddr,
		"-use", "jbuQMI4gMUeh1PPmKOtiBg==",
	}

	code := c.Run(args)
	if code != 1 {
		t.Fatalf("bad: %d.
%#v", code, ui.ErrorWriter.String())
	}

	// Node errors appear in stderr
	if !strings.Contains(ui.ErrorWriter.String(), "not in the keyring") {
		t.Fatalf("expected absent key error")
	}
}

func TestKeysCommandRun_RemoveKey(t *testing.T) {
	a1 := testKeysCommandAgent(t)
	defer a1.Shutdown()
	rpcAddr, ipc := testIPC(t, a1)
	defer ipc.Shutdown()

	ui := new(cli.MockUi)
	c := &KeysCommand{Ui: ui}

	rpcClient, err := client.NewRPCClient(rpcAddr)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	keys, _, _, err := rpcClient.ListKeys()
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if len(keys) != 2 {
		t.Fatalf("expected 2 keys: %v", keys)
	}

	// Removing non-existing key still returns 0 (noop)
	args := []string{
		"-rpc-addr=" + rpcAddr,
		"-remove", "eodFZZjm7pPwIZ0Miy7boQ==",
	}

	code := c.Run(args)
	if code != 0 {
		t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
	}

	// Number of keys unchanged after noop command
	keys, _, _, err = rpcClient.ListKeys()
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if len(keys) != 2 {
		t.Fatalf("expected 2 keys: %v", keys)
	}

	// Removing a primary key returns 1
	args = []string{
		"-rpc-addr=" + rpcAddr,
		"-remove", "SNCg1bQSoCdGVlEx+TgfBw==",
	}

	ui.ErrorWriter.Reset()
	code = c.Run(args)
	if code != 1 {
		t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
	}
	if !strings.Contains(ui.ErrorWriter.String(), "Error removing key") {
		t.Fatalf("bad: %#v", ui.OutputWriter.String())
	}

	// Removing a non-primary, existing key returns 0
	args = []string{
		"-rpc-addr=" + rpcAddr,
		"-remove", "vbitCcJNwNP4aEWHgofjMg==",
	}

	code = c.Run(args)
	if code != 0 {
		t.Fatalf("bad: %d.
%#v", code, ui.ErrorWriter.String()) } // Key removed after successful -remove command keys, _, _, err = rpcClient.ListKeys() if err != nil { t.Fatalf("err: %s", err) } if len(keys) != 1 { t.Fatalf("expected 1 key: %v", keys) } } func TestKeysCommandRun_RemoveKeyFailure(t *testing.T) { a1 := testKeysCommandAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &KeysCommand{Ui: ui} // Trying to remove the primary key returns 1 args := []string{ "-rpc-addr=" + rpcAddr, "-remove", "SNCg1bQSoCdGVlEx+TgfBw==", } code := c.Run(args) if code != 1 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } // Node errors appear in stderr if !strings.Contains(ui.ErrorWriter.String(), "not allowed") { t.Fatalf("expected primary key removal error") } } func TestKeysCommandRun_ListKeys(t *testing.T) { a1 := testKeysCommandAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &KeysCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-list", } code := c.Run(args) if code == 1 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), "SNCg1bQSoCdGVlEx+TgfBw==") { t.Fatalf("missing expected key") } if !strings.Contains(ui.OutputWriter.String(), "vbitCcJNwNP4aEWHgofjMg==") { t.Fatalf("missing expected key") } } func TestKeysCommandRun_ListKeysFailure(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &KeysCommand{Ui: ui} // Trying to list keys with encryption disabled returns 1 args := []string{ "-rpc-addr=" + rpcAddr, "-list", } code := c.Run(args) if code != 1 { t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.ErrorWriter.String(), "not enabled") { t.Fatalf("expected empty keyring error") } } func TestKeysCommandRun_BadOptions(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &KeysCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-install", "vbitCcJNwNP4aEWHgofjMg==", "-use", "vbitCcJNwNP4aEWHgofjMg==", } code := c.Run(args) if code != 1 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } args = []string{ "-rpc-addr=" + rpcAddr, "-list", "-remove", "SNCg1bQSoCdGVlEx+TgfBw==", } code = c.Run(args) if code != 1 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } } serf-0.6.4/command/leave.go000066400000000000000000000023671246721563000155370ustar00rootroot00000000000000package command import ( "flag" "fmt" "github.com/mitchellh/cli" "strings" ) // LeaveCommand is a Command implementation that instructs // the Serf agent to gracefully leave the cluster type LeaveCommand struct { Ui cli.Ui } func (c *LeaveCommand) Help() string { helpText := ` Usage: serf leave Causes the agent to gracefully leave the Serf cluster and shutdown. Options: -rpc-addr=127.0.0.1:7373 RPC address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. 
` return strings.TrimSpace(helpText) } func (c *LeaveCommand) Run(args []string) int { cmdFlags := flag.NewFlagSet("leave", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } client, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer client.Close() if err := client.Leave(); err != nil { c.Ui.Error(fmt.Sprintf("Error leaving: %s", err)) return 1 } c.Ui.Output("Graceful leave complete") return 0 } func (c *LeaveCommand) Synopsis() string { return "Gracefully leaves the Serf cluster and shuts down" } serf-0.6.4/command/leave_test.go000066400000000000000000000011541246721563000165670ustar00rootroot00000000000000package command import ( "github.com/mitchellh/cli" "strings" "testing" ) func TestLeaveCommand_implements(t *testing.T) { var _ cli.Command = &LeaveCommand{} } func TestLeaveCommandRun(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &LeaveCommand{Ui: ui} args := []string{"-rpc-addr=" + rpcAddr} code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), "leave complete") { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } serf-0.6.4/command/members.go000066400000000000000000000122241246721563000160660ustar00rootroot00000000000000package command import ( "flag" "fmt" "github.com/hashicorp/serf/command/agent" "github.com/mitchellh/cli" "github.com/ryanuber/columnize" "net" "strings" ) // MembersCommand is a Command implementation that queries a running // Serf agent what members are part of the cluster currently. type MembersCommand struct { Ui cli.Ui } // A container of member details. 
Maintaining a command-specific struct here // makes sense so that the agent.Member struct can evolve without changing the // keys in the output interface. type Member struct { detail bool Name string `json:"name"` Addr string `json:"addr"` Port uint16 `json:"port"` Tags map[string]string `json:"tags"` Status string `json:"status"` Proto map[string]uint8 `json:"protocol"` } type MemberContainer struct { Members []Member `json:"members"` } func (c MemberContainer) String() string { var result []string for _, member := range c.Members { tags := strings.Join(agent.MarshalTags(member.Tags), ",") line := fmt.Sprintf("%s|%s|%s|%s", member.Name, member.Addr, member.Status, tags) if member.detail { line += fmt.Sprintf( "|Protocol Version: %d|Available Protocol Range: [%d, %d]", member.Proto["version"], member.Proto["min"], member.Proto["max"]) } result = append(result, line) } return columnize.SimpleFormat(result) } func (c *MembersCommand) Help() string { helpText := ` Usage: serf members [options] Outputs the members of a running Serf agent. Options: -detailed Additional information such as protocol versions will be shown (only affects text output format). -format If provided, output is returned in the specified format. Valid formats are 'json', and 'text' (default) -name= If provided, only members matching the regexp are returned. The regexp is anchored at the start and end, and must be a full match. -role= If provided, output is filtered to only nodes matching the regular expression for role. '-role' is deprecated in favor of '-tag role=foo'. The regexp is anchored at the start and end, and must be a full match. -status= If provided, output is filtered to only nodes matching the regular expression for status -tag = If provided, output is filtered to only nodes with the tag with value matching the regular expression. tag can be specified multiple times to filter on multiple keys. The regexp is anchored at the start and end, and must be a full match. 
-rpc-addr=127.0.0.1:7373 RPC address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. ` return strings.TrimSpace(helpText) } func (c *MembersCommand) Run(args []string) int { var detailed bool var roleFilter, statusFilter, nameFilter, format string var tags []string cmdFlags := flag.NewFlagSet("members", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } cmdFlags.BoolVar(&detailed, "detailed", false, "detailed output") cmdFlags.StringVar(&roleFilter, "role", "", "role filter") cmdFlags.StringVar(&statusFilter, "status", "", "status filter") cmdFlags.StringVar(&format, "format", "text", "output format") cmdFlags.Var((*agent.AppendSliceValue)(&tags), "tag", "tag filter") cmdFlags.StringVar(&nameFilter, "name", "", "name filter") rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } // Deprecation warning for role if roleFilter != "" { c.Ui.Output("Deprecation warning: 'Role' has been replaced with 'Tags'") tags = append(tags, fmt.Sprintf("role=%s", roleFilter)) } reqtags, err := agent.UnmarshalTags(tags) if err != nil { c.Ui.Error(fmt.Sprintf("Error: %s", err)) return 1 } client, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer client.Close() members, err := client.MembersFiltered(reqtags, statusFilter, nameFilter) if err != nil { c.Ui.Error(fmt.Sprintf("Error retrieving members: %s", err)) return 1 } result := MemberContainer{} for _, member := range members { addr := net.TCPAddr{IP: member.Addr, Port: int(member.Port)} result.Members = append(result.Members, Member{ detail: detailed, Name: member.Name, Addr: addr.String(), Port: member.Port, Tags: member.Tags, Status: member.Status, Proto: map[string]uint8{ "min": member.DelegateMin, "max": member.DelegateMax, "version": member.DelegateCur, }, }) } output, err := formatOutput(result, format) if err != nil { 
c.Ui.Error(fmt.Sprintf("Encoding error: %s", err)) return 1 } c.Ui.Output(string(output)) return 0 } func (c *MembersCommand) Synopsis() string { return "Lists the members of a Serf cluster" } serf-0.6.4/command/members_test.go000066400000000000000000000111701246721563000171240ustar00rootroot00000000000000package command import ( "github.com/mitchellh/cli" "strings" "testing" ) func TestMembersCommand_implements(t *testing.T) { var _ cli.Command = &MembersCommand{} } func TestMembersCommandRun(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{"-rpc-addr=" + rpcAddr} code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_statusFilter(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-status=alive", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_statusFilter_failed(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-status=(failed|left)", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) } if strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_roleFilter(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-role=test", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_roleFilter_failed(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-role=primary", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_tagFilter(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-tag=tag1=foo", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_tagFilter_failed(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-tag=tag1=nomatch", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) } if strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_multiTagFilter(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-tag=tag1=foo", "-tag=tag2=bar", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestMembersCommandRun_multiTagFilter_failed(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &MembersCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-tag=tag1=foo", "-tag=tag2=nomatch", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } serf-0.6.4/command/monitor.go000066400000000000000000000054251246721563000161300ustar00rootroot00000000000000package command import ( "flag" "fmt" "github.com/hashicorp/logutils" "github.com/mitchellh/cli" "strings" "sync" ) // MonitorCommand is a Command implementation that streams the log output // of a running Serf agent. type MonitorCommand struct { ShutdownCh <-chan struct{} Ui cli.Ui lock sync.Mutex quitting bool } func (c *MonitorCommand) Help() string { helpText := ` Usage: serf monitor [options] Shows recent log messages of a Serf agent, and attaches to the agent, outputting log messages as they occur in real time. The monitor lets you listen for log levels that may be filtered out of the Serf agent. 
For example your agent may only be logging at INFO level, but with the monitor you can see the DEBUG level logs. Options: -log-level=info Log level of the agent. -rpc-addr=127.0.0.1:7373 RPC address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. ` return strings.TrimSpace(helpText) } func (c *MonitorCommand) Run(args []string) int { var logLevel string cmdFlags := flag.NewFlagSet("monitor", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } cmdFlags.StringVar(&logLevel, "log-level", "INFO", "log level") rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } client, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer client.Close() eventCh := make(chan map[string]interface{}, 1024) streamHandle, err := client.Stream("*", eventCh) if err != nil { c.Ui.Error(fmt.Sprintf("Error starting stream: %s", err)) return 1 } defer client.Stop(streamHandle) logCh := make(chan string, 1024) monHandle, err := client.Monitor(logutils.LogLevel(logLevel), logCh) if err != nil { c.Ui.Error(fmt.Sprintf("Error starting monitor: %s", err)) return 1 } defer client.Stop(monHandle) eventDoneCh := make(chan struct{}) go func() { defer close(eventDoneCh) OUTER: for { select { case log := <-logCh: if log == "" { break OUTER } c.Ui.Info(log) case event := <-eventCh: if event == nil { break OUTER } c.Ui.Info("Event Info:") for key, val := range event { c.Ui.Info(fmt.Sprintf("\t%s: %#v", key, val)) } } } c.lock.Lock() defer c.lock.Unlock() if !c.quitting { c.Ui.Info("") c.Ui.Output("Remote side ended the monitor! 
This usually means that the\n" + "remote side has exited or crashed.") } }() select { case <-eventDoneCh: return 1 case <-c.ShutdownCh: c.lock.Lock() c.quitting = true c.lock.Unlock() } return 0 } func (c *MonitorCommand) Synopsis() string { return "Stream logs from a Serf agent" } serf-0.6.4/command/output.go000066400000000000000000000015071246721563000157760ustar00rootroot00000000000000package command import ( "encoding/json" "fmt" "strings" ) // Format some raw data for output. For better or worse, this currently forces // the passed data object to implement fmt.Stringer, since it's pretty hard to // implement a canonical *-to-string function. func formatOutput(data interface{}, format string) ([]byte, error) { var out string switch format { case "json": jsonout, err := json.MarshalIndent(data, "", " ") if err != nil { return nil, err } out = string(jsonout) case "text": out = data.(fmt.Stringer).String() default: return nil, fmt.Errorf("Invalid output format \"%s\"", format) } return []byte(prepareOutput(out)), nil } // Apply some final formatting to make sure we don't end up with extra newlines func prepareOutput(in string) string { return strings.TrimSpace(string(in)) } serf-0.6.4/command/output_test.go000066400000000000000000000026351246721563000170400ustar00rootroot00000000000000package command import ( "fmt" "testing" ) type OutputTest struct { XMLName string `json:"-"` TestString string `json:"test_string"` TestInt int `json:"test_int"` TestNil []byte `json:"test_nil"` TestNested OutputTestNested `json:"nested"` } type OutputTestNested struct { NestKey string `json:"nest_key"` } func (o OutputTest) String() string { return fmt.Sprintf("%s %d %s", o.TestString, o.TestInt, o.TestNil) } func TestCommandOutput(t *testing.T) { var formatted []byte result := OutputTest{ TestString: "woooo a string", TestInt: 77, TestNil: nil, TestNested: OutputTestNested{ NestKey: "nest_value", }, } json_expected := `{ "test_string": "woooo a string", "test_int": 77, 
"test_nil": null, "nested": { "nest_key": "nest_value" } }` formatted, _ = formatOutput(result, "json") if string(formatted) != json_expected { t.Fatalf("bad json:\n%s\n\nexpected:\n%s", formatted, json_expected) } text_expected := "woooo a string 77" formatted, _ = formatOutput(result, "text") if string(formatted) != text_expected { t.Fatalf("bad output:\n\"%s\"\n\nexpected:\n\"%s\"", formatted, text_expected) } error_expected := `Invalid output format "boo"` _, err := formatOutput(result, "boo") if err.Error() != error_expected { t.Fatalf("bad output:\n\"%s\"\n\nexpected:\n\"%s\"", err.Error(), error_expected) } } serf-0.6.4/command/query.go000066400000000000000000000132311246721563000156000ustar00rootroot00000000000000package command import ( "flag" "fmt" "github.com/hashicorp/serf/client" "github.com/hashicorp/serf/command/agent" "github.com/mitchellh/cli" "strings" "time" ) // QueryCommand is a Command implementation that is used to trigger a new // query and wait for responses and acks type QueryCommand struct { ShutdownCh <-chan struct{} Ui cli.Ui } func (c *QueryCommand) Help() string { helpText := ` Usage: serf query [options] name payload Dispatches a query to the Serf cluster. Options: -format If provided, output is returned in the specified format. Valid formats are 'json', and 'text' (default) -node=NAME This flag can be provided multiple times to filter responses to only named nodes. -tag key=regexp This flag can be provided multiple times to filter responses to only nodes matching the tags -timeout="15s" Providing a timeout overrides the default timeout -no-ack Setting this prevents nodes from sending an acknowledgement of the query -rpc-addr=127.0.0.1:7373 RPC address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. 
` return strings.TrimSpace(helpText) } func (c *QueryCommand) Run(args []string) int { var noAck bool var nodes []string var tags []string var timeout time.Duration var format string cmdFlags := flag.NewFlagSet("event", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } cmdFlags.Var((*agent.AppendSliceValue)(&nodes), "node", "node filter") cmdFlags.Var((*agent.AppendSliceValue)(&tags), "tag", "tag filter") cmdFlags.DurationVar(&timeout, "timeout", 0, "query timeout") cmdFlags.BoolVar(&noAck, "no-ack", false, "no-ack") cmdFlags.StringVar(&format, "format", "text", "output format") rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } // Setup the filter tags filterTags, err := agent.UnmarshalTags(tags) if err != nil { c.Ui.Error(fmt.Sprintf("Error: %s", err)) return 1 } args = cmdFlags.Args() if len(args) < 1 { c.Ui.Error("A query name must be specified.") c.Ui.Error("") c.Ui.Error(c.Help()) return 1 } else if len(args) > 2 { c.Ui.Error("Too many command line arguments. 
Only a name and payload must be specified.") c.Ui.Error("") c.Ui.Error(c.Help()) return 1 } name := args[0] var payload []byte if len(args) == 2 { payload = []byte(args[1]) } cl, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer cl.Close() // Setup the response handler var handler queryRespFormat switch format { case "text": handler = &textQueryRespFormat{ ui: c.Ui, name: name, noAck: noAck, } case "json": handler = &jsonQueryRespFormat{ ui: c.Ui, Responses: make(map[string]string), } default: c.Ui.Error(fmt.Sprintf("Invalid format: %s", format)) return 1 } ackCh := make(chan string, 128) respCh := make(chan client.NodeResponse, 128) params := client.QueryParam{ FilterNodes: nodes, FilterTags: filterTags, RequestAck: !noAck, Timeout: timeout, Name: name, Payload: payload, AckCh: ackCh, RespCh: respCh, } if err := cl.Query(&params); err != nil { c.Ui.Error(fmt.Sprintf("Error sending query: %s", err)) return 1 } handler.Started() OUTER: for { select { case a := <-ackCh: if a == "" { break OUTER } handler.AckReceived(a) case r := <-respCh: if r.From == "" { break OUTER } handler.ResponseReceived(r) case <-c.ShutdownCh: return 1 } } if err := handler.Finished(); err != nil { return 1 } return 0 } func (c *QueryCommand) Synopsis() string { return "Send a query to the Serf cluster" } // queryRespFormat is used to switch our handler based on the format type queryRespFormat interface { Started() AckReceived(from string) ResponseReceived(resp client.NodeResponse) Finished() error } // textQueryRespFormat is used to output the results in a human-readable // format that is streamed as results come in type textQueryRespFormat struct { ui cli.Ui name string noAck bool numAcks int numResp int } func (t *textQueryRespFormat) Started() { t.ui.Output(fmt.Sprintf("Query '%s' dispatched", t.name)) } func (t *textQueryRespFormat) AckReceived(from string) { t.numAcks++ t.ui.Info(fmt.Sprintf("Ack from 
'%s'", from)) } func (t *textQueryRespFormat) ResponseReceived(r client.NodeResponse) { t.numResp++ // Remove the trailing newline if there is one payload := r.Payload if n := len(payload); n > 0 && payload[n-1] == '\n' { payload = payload[:n-1] } t.ui.Info(fmt.Sprintf("Response from '%s': %s", r.From, payload)) } func (t *textQueryRespFormat) Finished() error { if !t.noAck { t.ui.Output(fmt.Sprintf("Total Acks: %d", t.numAcks)) } t.ui.Output(fmt.Sprintf("Total Responses: %d", t.numResp)) return nil } // jsonQueryRespFormat is used to output the results in a JSON format type jsonQueryRespFormat struct { ui cli.Ui Acks []string Responses map[string]string } func (j *jsonQueryRespFormat) Started() {} func (j *jsonQueryRespFormat) AckReceived(from string) { j.Acks = append(j.Acks, from) } func (j *jsonQueryRespFormat) ResponseReceived(r client.NodeResponse) { j.Responses[r.From] = string(r.Payload) } func (j *jsonQueryRespFormat) Finished() error { output, err := formatOutput(j, "json") if err != nil { j.ui.Error(fmt.Sprintf("Encoding error: %s", err)) return err } j.ui.Output(string(output)) return nil } serf-0.6.4/command/query_test.go000066400000000000000000000100341246721563000166350ustar00rootroot00000000000000package command import ( "encoding/json" "github.com/mitchellh/cli" "strings" "testing" ) func TestQueryCommand_implements(t *testing.T) { var _ cli.Command = &QueryCommand{} } func TestQueryCommandRun_noName(t *testing.T) { ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{"-rpc-addr=foo"} code := c.Run(args) if code != 1 { t.Fatalf("bad: %d", code) } if !strings.Contains(ui.ErrorWriter.String(), "query name") { t.Fatalf("bad: %#v", ui.ErrorWriter.String()) } } func TestQueryCommandRun_tooMany(t *testing.T) { ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{"-rpc-addr=foo", "foo", "bar", "baz"} code := c.Run(args) if code != 1 { t.Fatalf("bad: %d", code) } if !strings.Contains(ui.ErrorWriter.String(), "Too many") { 
t.Fatalf("bad: %#v", ui.ErrorWriter.String()) } } func TestQueryCommandRun(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{"-rpc-addr=" + rpcAddr, "-timeout=500ms", "deploy", "abcd1234"} code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestQueryCommandRun_tagFilter(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-tag=tag1=foo", "foo", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestQueryCommandRun_tagFilter_failed(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-tag=tag1=nomatch", "foo", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestQueryCommandRun_nodeFilter(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-node", a1.SerfConfig().NodeName, "foo", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestQueryCommandRun_nodeFilter_failed(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{ "-rpc-addr=" + rpcAddr, "-node=whoisthis", "foo", } code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } func TestQueryCommandRun_formatJSON(t *testing.T) { type output struct { Acks []string Responses map[string]string } a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &QueryCommand{Ui: ui} args := []string{"-rpc-addr=" + rpcAddr, "-format=json", "-timeout=500ms", "deploy", "abcd1234"} code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } // Decode the output dec := json.NewDecoder(ui.OutputWriter) var out output if err := dec.Decode(&out); err != nil { t.Fatalf("Decode err: %v", err) } if out.Acks[0] != a1.SerfConfig().NodeName { t.Fatalf("bad: %#v", out) } } serf-0.6.4/command/reachability.go000066400000000000000000000105511246721563000170750ustar00rootroot00000000000000package command import ( "flag" "fmt" "github.com/hashicorp/serf/client" "github.com/hashicorp/serf/serf" "github.com/mitchellh/cli" "strings" "time" ) const ( tooManyAcks = `This could mean Serf is detecting false-failures due to a misconfiguration or network issue.` tooFewAcks = `This could mean Serf gossip packets are being lost due to a misconfiguration or network issue.` duplicateResponses = `Duplicate responses means there is a misconfiguration. 
Verify that node names are unique.` troubleshooting = ` Troubleshooting tips: * Ensure that the bind addr:port is accessible by all other nodes * If an advertise address is set, ensure it routes to the bind address * Check that no nodes are behind a NAT * If nodes are behind firewalls or iptables, check that Serf traffic is permitted (UDP and TCP) * Verify networking equipment is functional` ) // ReachabilityCommand is a Command implementation that is used to trigger // a new reachability test type ReachabilityCommand struct { ShutdownCh <-chan struct{} Ui cli.Ui } func (c *ReachabilityCommand) Help() string { helpText := ` Usage: serf reachability [options] Tests the network reachability of this node Options: -rpc-addr=127.0.0.1:7373 RPC address of the Serf agent. -rpc-auth="" RPC auth token of the Serf agent. -verbose Verbose mode ` return strings.TrimSpace(helpText) } func (c *ReachabilityCommand) Run(args []string) int { var verbose bool cmdFlags := flag.NewFlagSet("reachability", flag.ContinueOnError) cmdFlags.Usage = func() { c.Ui.Output(c.Help()) } cmdFlags.BoolVar(&verbose, "verbose", false, "verbose mode") rpcAddr := RPCAddrFlag(cmdFlags) rpcAuth := RPCAuthFlag(cmdFlags) if err := cmdFlags.Parse(args); err != nil { return 1 } cl, err := RPCClient(*rpcAddr, *rpcAuth) if err != nil { c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err)) return 1 } defer cl.Close() ackCh := make(chan string, 128) // Get the list of members members, err := cl.Members() if err != nil { c.Ui.Error(fmt.Sprintf("Error getting members: %s", err)) return 1 } // Get only the live members liveMembers := make(map[string]struct{}) for _, m := range members { if m.Status == "alive" { liveMembers[m.Name] = struct{}{} } } c.Ui.Output(fmt.Sprintf("Total members: %d, live members: %d", len(members), len(liveMembers))) // Start the query params := client.QueryParam{ RequestAck: true, Name: serf.InternalQueryPrefix + "ping", AckCh: ackCh, } if err := cl.Query(&params); err != nil { 
c.Ui.Error(fmt.Sprintf("Error sending query: %s", err)) return 1 } c.Ui.Output("Starting reachability test...") start := time.Now() last := time.Now() // Track responses and acknowledgements exit := 0 dups := false numAcks := 0 acksFrom := make(map[string]struct{}, len(members)) OUTER: for { select { case a := <-ackCh: if a == "" { break OUTER } if verbose { c.Ui.Output(fmt.Sprintf("\tAck from '%s'", a)) } numAcks++ if _, ok := acksFrom[a]; ok { dups = true c.Ui.Output(fmt.Sprintf("Duplicate response from '%v'", a)) } acksFrom[a] = struct{}{} last = time.Now() case <-c.ShutdownCh: c.Ui.Error("Test interrupted") return 1 } } if verbose { total := float64(time.Now().Sub(start)) / float64(time.Second) timeToLast := float64(last.Sub(start)) / float64(time.Second) c.Ui.Output(fmt.Sprintf("Query time: %0.2f sec, time to last response: %0.2f sec", total, timeToLast)) } // Print troubleshooting info for duplicate responses if dups { c.Ui.Output(duplicateResponses) exit = 1 } n := len(liveMembers) if numAcks == n { c.Ui.Output("Successfully contacted all live nodes") } else if numAcks > n { c.Ui.Output("Received more acks than live nodes! Acks from non-live nodes:") for m := range acksFrom { if _, ok := liveMembers[m]; !ok { c.Ui.Output(fmt.Sprintf("\t%s", m)) } } c.Ui.Output(tooManyAcks) c.Ui.Output(troubleshooting) return 1 } else if numAcks < n { c.Ui.Output("Received fewer acks than live nodes! 
Missing acks from:") for m := range liveMembers { if _, ok := acksFrom[m]; !ok { c.Ui.Output(fmt.Sprintf("\t%s", m)) } } c.Ui.Output(tooFewAcks) c.Ui.Output(troubleshooting) return 1 } return exit } func (c *ReachabilityCommand) Synopsis() string { return "Test network reachability" } serf-0.6.4/command/reachability_test.go000066400000000000000000000014231246721563000201320ustar00rootroot00000000000000package command import ( "github.com/mitchellh/cli" "strings" "testing" ) func TestReachabilityCommand_implements(t *testing.T) { var _ cli.Command = &ReachabilityCommand{} } func TestReachabilityCommand_Run(t *testing.T) { a1 := testAgent(t) defer a1.Shutdown() rpcAddr, ipc := testIPC(t, a1) defer ipc.Shutdown() ui := new(cli.MockUi) c := &ReachabilityCommand{Ui: ui} args := []string{"-rpc-addr=" + rpcAddr, "-verbose"} code := c.Run(args) if code != 0 { t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), a1.SerfConfig().NodeName) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } if !strings.Contains(ui.OutputWriter.String(), "Successfully") { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } serf-0.6.4/command/rpc.go000066400000000000000000000017311246721563000152210ustar00rootroot00000000000000package command import ( "flag" "github.com/hashicorp/serf/client" "os" ) // RPCAddrFlag returns a pointer to a string that will be populated // when the given flagset is parsed with the RPC address of the Serf. func RPCAddrFlag(f *flag.FlagSet) *string { defaultRpcAddr := os.Getenv("SERF_RPC_ADDR") if defaultRpcAddr == "" { defaultRpcAddr = "127.0.0.1:7373" } return f.String("rpc-addr", defaultRpcAddr, "RPC address of the Serf agent") } // RPCAuthFlag returns a pointer to a string that will be populated // when the given flagset is parsed with the RPC auth token of the Serf. 
func RPCAuthFlag(f *flag.FlagSet) *string {
	rpcAuth := os.Getenv("SERF_RPC_AUTH")
	return f.String("rpc-auth", rpcAuth,
		"RPC auth token of the Serf agent")
}

// RPCClient returns a new Serf RPC client with the given address.
func RPCClient(addr, auth string) (*client.RPCClient, error) {
	config := client.Config{Addr: addr, AuthKey: auth}
	return client.ClientFromConfig(&config)
}
serf-0.6.4/command/tags.go000066400000000000000000000035001246721563000153700ustar00rootroot00000000000000package command

import (
	"flag"
	"fmt"
	"github.com/hashicorp/serf/command/agent"
	"github.com/mitchellh/cli"
	"strings"
)

// TagsCommand is an interface to dynamically add or otherwise modify a
// running serf agent's tags.
type TagsCommand struct {
	Ui cli.Ui
}

func (c *TagsCommand) Help() string {
	helpText := `
Usage: serf tags [options] ...

  Modifies tags on a running Serf agent.

Options:

  -rpc-addr=127.0.0.1:7373  RPC address of the Serf agent.
  -rpc-auth=""              RPC auth token of the Serf agent.
  -set key=value            Creates or modifies the value of a tag
  -delete key               Removes a tag, if present
`
	return strings.TrimSpace(helpText)
}

func (c *TagsCommand) Run(args []string) int {
	var tagPairs []string
	var delTags []string
	cmdFlags := flag.NewFlagSet("tags", flag.ContinueOnError)
	cmdFlags.Usage = func() { c.Ui.Output(c.Help()) }
	cmdFlags.Var((*agent.AppendSliceValue)(&tagPairs), "set",
		"tag pairs, specified as key=value")
	cmdFlags.Var((*agent.AppendSliceValue)(&delTags), "delete",
		"tag keys to unset")
	rpcAddr := RPCAddrFlag(cmdFlags)
	rpcAuth := RPCAuthFlag(cmdFlags)
	if err := cmdFlags.Parse(args); err != nil {
		return 1
	}

	if len(tagPairs) == 0 && len(delTags) == 0 {
		c.Ui.Output(c.Help())
		return 1
	}

	client, err := RPCClient(*rpcAddr, *rpcAuth)
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Error connecting to Serf agent: %s", err))
		return 1
	}
	defer client.Close()

	tags, err := agent.UnmarshalTags(tagPairs)
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Error: %s", err))
		return 1
	}

	if err := client.UpdateTags(tags, delTags); err != nil {
		c.Ui.Error(fmt.Sprintf("Error setting tags: %s", err))
		return 1
	}

	c.Ui.Output("Successfully updated agent tags")
	return 0
}

func (c *TagsCommand) Synopsis() string {
	return "Modify tags of a running Serf agent"
}
serf-0.6.4/command/tags_test.go000066400000000000000000000022301246721563000164250ustar00rootroot00000000000000package command

import (
	"github.com/hashicorp/serf/client"
	"github.com/mitchellh/cli"
	"strings"
	"testing"
)

func TestTagsCommand_implements(t *testing.T) {
	var _ cli.Command = &TagsCommand{}
}

func TestTagsCommandRun(t *testing.T) {
	a1 := testAgent(t)
	defer a1.Shutdown()
	rpcAddr, ipc := testIPC(t, a1)
	defer ipc.Shutdown()

	ui := new(cli.MockUi)
	c := &TagsCommand{Ui: ui}
	args := []string{
		"-rpc-addr=" + rpcAddr,
		"-delete", "tag2",
		"-set", "a=1",
		"-set", "b=2",
	}

	code := c.Run(args)
	if code != 0 {
		t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String())
	}

	if !strings.Contains(ui.OutputWriter.String(),
		"Successfully updated agent tags") {
		t.Fatalf("bad: %#v", ui.OutputWriter.String())
	}

	rpcClient, err := client.NewRPCClient(rpcAddr)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	mem, err := rpcClient.Members()
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if len(mem) != 1 {
		t.Fatalf("bad: %v", mem)
	}

	m0 := mem[0]
	if _, ok := m0.Tags["tag2"]; ok {
		t.Fatalf("bad: %v", m0.Tags)
	}
	if _, ok := m0.Tags["a"]; !ok {
		t.Fatalf("bad: %v", m0.Tags)
	}
	if _, ok := m0.Tags["b"]; !ok {
		t.Fatalf("bad: %v", m0.Tags)
	}
}
serf-0.6.4/command/util_test.go000066400000000000000000000032651246721563000164550ustar00rootroot00000000000000package command

import (
	"fmt"
	"github.com/hashicorp/serf/command/agent"
	"github.com/hashicorp/serf/serf"
	"github.com/hashicorp/serf/testutil"
	"io"
	"math/rand"
	"net"
	"os"
	"testing"
	"time"
)

func init() {
	// Seed the random number generator
	rand.Seed(time.Now().UnixNano())
}

func testAgent(t *testing.T) *agent.Agent {
	agentConfig := agent.DefaultConfig()
	serfConfig := serf.DefaultConfig()
	return testAgentWithConfig(t, agentConfig, serfConfig)
}

func testAgentWithConfig(t *testing.T, agentConfig *agent.Config,
	serfConfig *serf.Config) *agent.Agent {
	serfConfig.MemberlistConfig.BindAddr = testutil.GetBindAddr().String()
	serfConfig.MemberlistConfig.ProbeInterval = 50 * time.Millisecond
	serfConfig.MemberlistConfig.ProbeTimeout = 25 * time.Millisecond
	serfConfig.MemberlistConfig.SuspicionMult = 1
	serfConfig.NodeName = serfConfig.MemberlistConfig.BindAddr
	serfConfig.Tags = map[string]string{"role": "test", "tag1": "foo", "tag2": "bar"}

	agent, err := agent.Create(agentConfig, serfConfig, nil)
	if err != nil {
		t.Fatalf("err: %s", err)
	}
	if err := agent.Start(); err != nil {
		t.Fatalf("err: %s", err)
	}
	return agent
}

func getRPCAddr() string {
	for i := 0; i < 500; i++ {
		l, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", rand.Int31n(25000)+1024))
		if err == nil {
			l.Close()
			return l.Addr().String()
		}
	}

	panic("no listener")
}

func testIPC(t *testing.T, a *agent.Agent) (string, *agent.AgentIPC) {
	rpcAddr := getRPCAddr()
	l, err := net.Listen("tcp", rpcAddr)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	lw := agent.NewLogWriter(512)
	mult := io.MultiWriter(os.Stderr, lw)
	ipc := agent.NewAgentIPC(a, "", l, mult, lw)
	return rpcAddr, ipc
}
serf-0.6.4/command/version.go000066400000000000000000000016371246721563000161270ustar00rootroot00000000000000package command

import (
	"bytes"
	"fmt"
	"github.com/hashicorp/serf/serf"
	"github.com/mitchellh/cli"
)

// VersionCommand is a Command implementation that prints the version.
type VersionCommand struct {
	Revision          string
	Version           string
	VersionPrerelease string
	Ui                cli.Ui
}

func (c *VersionCommand) Help() string {
	return ""
}

func (c *VersionCommand) Run(_ []string) int {
	var versionString bytes.Buffer
	fmt.Fprintf(&versionString, "Serf v%s", c.Version)
	if c.VersionPrerelease != "" {
		fmt.Fprintf(&versionString, ".%s", c.VersionPrerelease)

		if c.Revision != "" {
			fmt.Fprintf(&versionString, " (%s)", c.Revision)
		}
	}

	c.Ui.Output(versionString.String())
	c.Ui.Output(fmt.Sprintf("Agent Protocol: %d (Understands back to: %d)",
		serf.ProtocolVersionMax, serf.ProtocolVersionMin))
	return 0
}

func (c *VersionCommand) Synopsis() string {
	return "Prints the Serf version"
}
serf-0.6.4/command/version_test.go000066400000000000000000000002401246721563000171530ustar00rootroot00000000000000package command

import (
	"github.com/mitchellh/cli"
	"testing"
)

func TestVersionCommand_implements(t *testing.T) {
	var _ cli.Command = &VersionCommand{}
}
serf-0.6.4/commands.go000066400000000000000000000047171246721563000146270ustar00rootroot00000000000000package main

import (
	"github.com/hashicorp/serf/command"
	"github.com/hashicorp/serf/command/agent"
	"github.com/mitchellh/cli"
	"os"
	"os/signal"
)

// Commands is the mapping of all the available Serf commands.
var Commands map[string]cli.CommandFactory

func init() {
	ui := &cli.BasicUi{Writer: os.Stdout}

	Commands = map[string]cli.CommandFactory{
		"agent": func() (cli.Command, error) {
			return &agent.Command{
				Ui:         ui,
				ShutdownCh: make(chan struct{}),
			}, nil
		},

		"event": func() (cli.Command, error) {
			return &command.EventCommand{
				Ui: ui,
			}, nil
		},

		"query": func() (cli.Command, error) {
			return &command.QueryCommand{
				ShutdownCh: makeShutdownCh(),
				Ui:         ui,
			}, nil
		},

		"force-leave": func() (cli.Command, error) {
			return &command.ForceLeaveCommand{
				Ui: ui,
			}, nil
		},

		"join": func() (cli.Command, error) {
			return &command.JoinCommand{
				Ui: ui,
			}, nil
		},

		"keygen": func() (cli.Command, error) {
			return &command.KeygenCommand{
				Ui: ui,
			}, nil
		},

		"keys": func() (cli.Command, error) {
			return &command.KeysCommand{
				Ui: ui,
			}, nil
		},

		"leave": func() (cli.Command, error) {
			return &command.LeaveCommand{
				Ui: ui,
			}, nil
		},

		"members": func() (cli.Command, error) {
			return &command.MembersCommand{
				Ui: ui,
			}, nil
		},

		"monitor": func() (cli.Command, error) {
			return &command.MonitorCommand{
				ShutdownCh: makeShutdownCh(),
				Ui:         ui,
			}, nil
		},

		"tags": func() (cli.Command, error) {
			return &command.TagsCommand{
				Ui: ui,
			}, nil
		},

		"reachability": func() (cli.Command, error) {
			return &command.ReachabilityCommand{
				ShutdownCh: makeShutdownCh(),
				Ui:         ui,
			}, nil
		},

		"info": func() (cli.Command, error) {
			return &command.InfoCommand{
				Ui: ui,
			}, nil
		},

		"version": func() (cli.Command, error) {
			return &command.VersionCommand{
				Revision:          GitCommit,
				Version:           Version,
				VersionPrerelease: VersionPrerelease,
				Ui:                ui,
			}, nil
		},
	}
}

// makeShutdownCh returns a channel that can be used for shutdown
// notifications for commands. This channel will send a message for every
// interrupt received.
func makeShutdownCh() <-chan struct{} {
	resultCh := make(chan struct{})

	signalCh := make(chan os.Signal, 4)
	signal.Notify(signalCh, os.Interrupt)
	go func() {
		for {
			<-signalCh
			resultCh <- struct{}{}
		}
	}()

	return resultCh
}
serf-0.6.4/demo/000077500000000000000000000000001246721563000134125ustar00rootroot00000000000000serf-0.6.4/demo/vagrant-cluster/000077500000000000000000000000001246721563000165335ustar00rootroot00000000000000serf-0.6.4/demo/vagrant-cluster/README.md000066400000000000000000000015071246721563000200150ustar00rootroot00000000000000# Vagrant Serf Demo

This demo provides a very simple Vagrantfile that creates two nodes,
one at "172.20.20.10" and another at "172.20.20.11". Both are running
a standard Ubuntu 12.04 distribution, and Serf is pre-installed.

To get started, you can start the cluster by just doing:

    $ vagrant up

Once it is finished, you should be able to see the following:

    $ vagrant status
    Current machine states:

    n1                        running (vmware_fusion)
    n2                        running (vmware_fusion)

At this point the two nodes are running and you can SSH in to play with them:

    $ vagrant ssh n1
    ...
    $ vagrant ssh n2
    ...

To learn more about starting Serf, joining nodes and interacting with the agent,
check out the [getting started guide](http://www.serfdom.io/intro/getting-started/install.html).
serf-0.6.4/demo/vagrant-cluster/Vagrantfile000066400000000000000000000014131246721563000207170ustar00rootroot00000000000000# -*- mode: ruby -*-
# vi: set ft=ruby :

$script = <